Releases: mosaicml/diffusion
v0.1.2
What's Changed
- NoOp Model by @Landanjs in #139
- Script to pre-compute CLIP and T5 by @Landanjs in #144
- Add option to shift noise schedules when changing resolution by @coryMosaicML in #153
- Expose option to set per-stream weighting in image and image_caption datasets by @coryMosaicML in #156
- HF image generation that integrates with Cory's earlier script by @rishab-partha in #158
- MMDiT implementation and text-to-image training with rectified flows by @coryMosaicML in #155
- Add option to use predefined aspect ratio buckets in the cropping transform by @coryMosaicML in #157
- Add latent logger for T5-XXL text encoder by @rishab-partha in #154
- Pass loggers to Trainer in eval by @jazcollins in #166
- Simple LoRA Finetuning (WIP) by @rishab-partha in #164
- Add option to change start and end SNR in SD2/SDXL configs by @coryMosaicML in #165
- Small bug fixes to bulk image generation by @coryMosaicML in #167
- Add dataset for running with precomputed latents from multiple captions by @coryMosaicML in #161
- Small bug fixes for running models without tokenizers by @coryMosaicML in #168
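To give context for the rectified-flow training added in #155, the core of the technique can be sketched as follows. This is a generic illustration of rectified flow, not this repo's implementation; the function name and tensor shapes are hypothetical.

```python
import numpy as np

def rectified_flow_step(x0, noise, t):
    """Illustrative rectified-flow quantities: the noisy input x_t on the
    straight path between data and noise, and the constant-velocity
    regression target the model is trained to predict."""
    t = t.reshape(-1, 1, 1, 1)            # broadcast timesteps over C, H, W dims
    x_t = (1.0 - t) * x0 + t * noise      # linear interpolation path
    target = noise - x0                   # velocity of the straight path
    return x_t, target
```

Training then minimizes the mean squared error between the model's prediction at (x_t, t) and this velocity target.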
New Contributors
- @rishab-partha made their first contribution in #158
Full Changelog: v0.1.1...v0.1.2
v0.1.1
Minor bug fix related to mask_pad_tokens at generate time, plus other noise-schedule-related features and options.
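One of the schedule options in #138, zero terminal SNR, can be sketched as the beta-rescaling described by Lin et al. in "Common Diffusion Noise Schedules and Sample Steps are Flawed". This is a generic sketch of the technique, not the repo's code.

```python
import numpy as np

def rescale_betas_zero_snr(betas):
    """Rescale a beta schedule so the final timestep has zero SNR,
    i.e. sqrt(alpha_bar_T) == 0, while leaving the first step unchanged."""
    alphas = 1.0 - betas
    alphas_bar_sqrt = np.sqrt(np.cumprod(alphas))
    first, last = alphas_bar_sqrt[0], alphas_bar_sqrt[-1]
    # Shift so the last value is exactly zero, then rescale so the
    # first value keeps its original magnitude.
    alphas_bar_sqrt = (alphas_bar_sqrt - last) * first / (first - last)
    alphas_bar = alphas_bar_sqrt ** 2
    # Recover per-step alphas from the cumulative product, then betas.
    alphas = np.concatenate([alphas_bar[:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas
```

With zero terminal SNR, the last training timestep carries no signal, which matches starting sampling from pure noise.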
What's Changed
- Optional quasirandom timesteps, zero terminal SNR, cosine schedule for SD models by @coryMosaicML in #138
- Add HF hub dependency by @coryMosaicML in #142
- Add link to CommonCanvas model weights by @Skylion007 in #143
- Fix autoencoder load by @RR4787 in #141
- Add option to use karras sigmas for SDXL style models by @coryMosaicML in #146
- Fix bug in stable diffusion when mask_pad_tokens is false by @coryMosaicML in #147
- Only use a text encoder mask in SD model forward if mask_pad_tokens is false by @coryMosaicML in #149
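The karras sigmas option from #146 refers to the noise-level spacing of Karras et al. (2022), which interpolates in sigma^(1/rho) space. A self-contained sketch follows; the default values here are the commonly used ones from the paper and are an assumption, not necessarily what this repo uses.

```python
import numpy as np

def karras_sigmas(n, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Noise levels spaced evenly in sigma^(1/rho), from sigma_max
    down to sigma_min, as in Karras et al. (2022)."""
    ramp = np.linspace(0.0, 1.0, n)
    max_inv = sigma_max ** (1.0 / rho)
    min_inv = sigma_min ** (1.0 / rho)
    return (max_inv + ramp * (min_inv - max_inv)) ** rho
```

Compared to uniform spacing, this concentrates more sampling steps at low noise levels, which tends to improve sample quality at small step counts.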
Full Changelog: v0.1.0...v0.1.1
v0.1.0
What's Changed
- Allow masking padding tokens in cross attention layers by @jazcollins in #94
- Fix typo in pyproject.toml by @eltociear in #92
- Autoencoder implementation and training by @coryMosaicML in #79
- Hotfix missing lpips requirement by @Skylion007 in #98
- Fixes for doing inference with masked padding by @coryMosaicML in #99
- Add script for running gradio demo from a local checkpoint by @coryMosaicML in #100
- Better StreamingDataset defaults while preserving old shuffle settings by @snarayan21 in #95
- Remove rounding in aspect ratio bucketing transform by @Landanjs in #111
- Add sample SDXL yamls and update README by @jazcollins in #112
- LogDiffusionImages Features and Refactors by @Landanjs in #104
- Bump gradio demo version by @coryMosaicML in #114
- Make custom autoencoders work with SD2 and SDXL models. by @coryMosaicML in #102
- Add algorithm to control randomness over different eval times by @coryMosaicML in #115
- Add ruff linter by @Skylion007 in #122
- Add image only dataset + script to add captions generated by LLaVA to a streaming dataset by @coryMosaicML in #118
- Code cleanup by @coryMosaicML in #120
- Update to latest transformers, diffusers, and other packages. by @coryMosaicML in #125
- Landan/text encoder refactor by @Landanjs in #124
- Add option to specify image output key in image dataloader factory by @coryMosaicML in #129
- Add explicit per block fsdp wrapping for SDXL by @coryMosaicML in #127
- Make local paths optional by @A-Jacobson in #128
- Fix masked padding bug by @A-Jacobson in #130
- Arbitrary aspect ratio buckets by @Landanjs in #126
- Add option to set per-channel mean, std. dev. of the autoencoder latents when training the UNet by @coryMosaicML in #132
- Test PR by @Landanjs in #134
- Arbitrary aspect ratio bucket boundaries by @Landanjs in #133
- Bug fix to enable fp16 by @RR4787 in #136
- Only download CLIP on rank 0 when doing eval by @coryMosaicML in #135
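The padding-token masking from #94 can be illustrated by converting a tokenizer attention mask into an additive bias for the cross-attention scores, so padded text positions receive zero attention weight. The function name and shapes here are hypothetical, for illustration only.

```python
import numpy as np

def cross_attention_bias(attention_mask):
    """Convert a tokenizer attention mask (1 = real token, 0 = padding)
    into an additive bias: 0 for real tokens, -inf for padding, so a
    softmax over the key axis assigns zero weight to padded positions."""
    bias = np.where(attention_mask == 1, 0.0, -np.inf)
    # Insert axes so the bias broadcasts over heads and query positions:
    # (batch, seq) -> (batch, 1, 1, seq)
    return bias[:, None, None, :]
```

The bias is added to the raw attention scores before the softmax; -inf entries become exactly zero after normalization.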
New Contributors
- @eltociear made their first contribution in #92
- @snarayan21 made their first contribution in #95
- @RR4787 made their first contribution in #136
Full Changelog: v0.0.1...v0.1.0
v0.0.1
Add callback to catch NaNs in the train loss (#97)