Tiny-Diffusion

Tiny‑Diffusion is a minimal implementation of a diffusion model for generating retro video‑game sprites. It follows the core ideas from Deeplearning.AI’s diffusion intro and adds clean training/sampling utilities.
Implementation details #
Code on GitHub

Features #
- Sprite generation at 16×16 pixels (retro style)
- DDPM training + fast DDIM sampling
- Optional context conditioning
- Animated GIFs of the denoising process
Tech stack #
- PyTorch, torchvision
- NumPy, matplotlib, pillow, tqdm
Project structure #
- model_training.py — training loop (with/without context), saves checkpoints
- sampling.py — sampling and visualization (DDPM/DDIM)
- diffusion_utilities.py — UNet/blocks, dataset, helpers
- sprites_1788_16x16.npy — training sprites (NumPy; loading sketch below)
- sprite_labels_nc_1788_16x16.npy — optional context labels
- weights/ — model checkpoints + generated GIFs
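A minimal sketch of how the bundled arrays can be loaded and wrapped for training. The array shapes, value range, and normalization below are assumptions for illustration; diffusion_utilities.py ships the actual dataset class.

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class SpriteDataset(Dataset):
    """Illustrative wrapper; the repo's own dataset lives in diffusion_utilities.py."""
    def __init__(self, sprite_path, label_path=None):
        # Assumed layout: (N, 16, 16, 3) uint8 sprites and optional per-sprite label vectors.
        self.sprites = np.load(sprite_path)
        self.labels = np.load(label_path) if label_path else None

    def __len__(self):
        return len(self.sprites)

    def __getitem__(self, idx):
        # HWC uint8 -> CHW float in [-1, 1], a common convention for diffusion models.
        x = torch.from_numpy(self.sprites[idx]).float().permute(2, 0, 1) / 127.5 - 1.0
        if self.labels is None:
            return x
        return x, torch.from_numpy(self.labels[idx]).float()

loader = DataLoader(
    SpriteDataset("sprites_1788_16x16.npy", "sprite_labels_nc_1788_16x16.npy"),
    batch_size=128, shuffle=True,
)
```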
Getting started #
Install deps:
pip install torch torchvision numpy matplotlib tqdm pillow
Train:
python model_training.py
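Under the hood, each training step noises a batch of sprites at random timesteps and trains the network to predict that noise with an MSE loss. The sketch below is a hedged outline of that step, assuming a noise-predicting model and a linear beta schedule; the real loop, schedule, and model signature are in model_training.py and diffusion_utilities.py.

```python
import torch
import torch.nn.functional as F

T = 500                                          # assumed number of diffusion timesteps
betas = torch.linspace(1e-4, 0.02, T)            # assumed linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product (alpha-bar)

def ddpm_loss(model, x0, context=None):
    """One DDPM training step: noise x0 at a random t and regress the added noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_bar.to(x0.device)[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise        # q(x_t | x_0)
    pred = model(x_t, t) if context is None else model(x_t, t, context)  # assumed signature
    return F.mse_loss(pred, noise)

# In the loop: loss = ddpm_loss(model, x0); loss.backward(); optimizer.step()
```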
Sample:
python sampling.py
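DDIM makes sampling fast by taking deterministic jumps along a strided subset of the training timesteps instead of stepping through all of them. A rough sketch, assuming the same schedule and noise-predicting model as the training sketch above; sampling.py is the authoritative version.

```python
import torch

@torch.no_grad()
def ddim_sample(model, shape, n_steps=25, T=500, device="cpu"):
    """Deterministic (eta = 0) DDIM sampling over a strided subset of timesteps."""
    betas = torch.linspace(1e-4, 0.02, T, device=device)           # assumed schedule
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    steps = torch.linspace(T - 1, 0, n_steps).long().tolist()
    x = torch.randn(shape, device=device)                           # start from pure noise
    for i, t in enumerate(steps):
        a_t = alphas_bar[t]
        a_prev = alphas_bar[steps[i + 1]] if i + 1 < len(steps) else torch.tensor(1.0, device=device)
        tt = torch.full((shape[0],), t, dtype=torch.long, device=device)
        eps = model(x, tt)                                           # predicted noise
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()          # estimate of x_0
        x = a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps      # deterministic update
    return x

# e.g. samples = ddim_sample(model, (16, 3, 16, 16)); rescale from [-1, 1] before viewing
```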
Outputs (checkpoints and GIFs) are written to weights/.
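The denoising GIFs can be assembled with pillow by keeping the intermediate frames from the sampling loop; the helper below is an illustrative sketch (frame format and output path are assumptions), not the repo's exact code.

```python
import numpy as np
from PIL import Image

def save_denoising_gif(frames, path="weights/denoising.gif", scale=8, fps=10):
    """frames: list of (16, 16, 3) float arrays in [-1, 1], one per denoising step."""
    imgs = []
    for f in frames:
        arr = ((np.clip(f, -1, 1) + 1) * 127.5).astype(np.uint8)   # back to uint8 RGB
        img = Image.fromarray(arr).resize((16 * scale, 16 * scale), Image.NEAREST)
        imgs.append(img)
    imgs[0].save(path, save_all=True, append_images=imgs[1:],
                 duration=int(1000 / fps), loop=0)
```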
Results #
The cover GIF shows the DDIM denoising trajectory and final sprite samples.

Customization #
- Adjust architecture/hparams in model_training.py and diffusion_utilities.py
- Replace the .npy files with your own sprites (matching shapes; see the sketch below)
- Toggle context conditioning in both training and sampling
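For custom data, the simplest route is to save your sprites as one NumPy array with the same layout as the bundled file. The sketch below assumes the (N, 16, 16, 3) uint8 layout used above and a hypothetical my_sprites/ folder of 16×16 PNGs; check diffusion_utilities.py for the layout it actually expects.

```python
import numpy as np
from PIL import Image
from pathlib import Path

# Hypothetical folder of 16x16 RGB PNGs; resize or crop beforehand if yours differ.
frames = [np.array(Image.open(p).convert("RGB"))
          for p in sorted(Path("my_sprites").glob("*.png"))]
sprites = np.stack(frames).astype(np.uint8)            # (N, 16, 16, 3)
np.save("sprites_custom_16x16.npy", sprites)
# Point model_training.py at the new file, and drop or rebuild the label file
# if you use context conditioning.
```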