giganttheo/distill-ccld


Distill CLOOB-Conditioned Latent Diffusion trained on WikiArt

As part of the HugGAN community event, I trained a 105M-parameter latent diffusion model using a knowledge distillation process.
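The distillation objective itself is not spelled out in this README. As an illustrative sketch only (the function names and the plain MSE matching loss are assumptions, not taken from this repo), a smaller student denoiser can be trained to match the larger teacher's prediction on the same noised latent:

```python
# Hypothetical sketch of a knowledge-distillation objective for a
# diffusion model: the student is trained so its output matches the
# teacher's output on the same input. Scalars stand in for latents.

def mse(a, b):
    # mean squared error between two equal-length sequences
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_loss(teacher_out, student_out):
    # the student minimises its distance to the teacher's prediction
    return mse(teacher_out, student_out)

teacher_pred = [0.1, -0.2, 0.3]   # teacher model output (illustrative)
student_pred = [0.0, -0.1, 0.35]  # student model output (illustrative)
loss = distillation_loss(teacher_pred, student_pred)
```

In a real training loop, both models would see the same noised latent and timestep, and the loss would be backpropagated through the student only.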

Open In Colab

[Sample image]

Prompt: "A snowy landscape, oil on canvas"

How to use

You will need dependencies from several repositories linked in the CLOOB latent diffusion repository:

  • CLIP
  • CLOOB: the model that encodes images and text into a unified latent space, used for conditioning the latent diffusion.
  • Latent Diffusion: the latent diffusion model definition.
  • Taming Transformers: a pretrained convolutional VQGAN, used as the autoencoder between image space and the latent space in which the diffusion is done.
  • v-diffusion: functions for sampling from a diffusion model with text and/or image prompts.
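For context on the v-diffusion parameterisation referenced above: the model predicts a quantity `v` combining the clean latent and the noise, and both can be recovered in closed form at any timestep. A minimal scalar sketch (function names are illustrative; the schedule `alpha = cos(t*pi/2)`, `sigma = sin(t*pi/2)` follows the v-diffusion convention):

```python
import math

# v-objective: for timestep t in [0, 1],
#   alpha = cos(t*pi/2), sigma = sin(t*pi/2)
#   x_t = alpha * x0 + sigma * eps      (noised latent)
#   v   = alpha * eps - sigma * x0      (model target)
# Given x_t and v, the clean latent x0 and noise eps are recoverable.

def alpha_sigma(t):
    return math.cos(t * math.pi / 2), math.sin(t * math.pi / 2)

def recover(x_t, v, t):
    # invert the parameterisation: alpha^2 + sigma^2 == 1
    alpha, sigma = alpha_sigma(t)
    x0 = alpha * x_t - sigma * v
    eps = sigma * x_t + alpha * v
    return x0, eps
```

Because `alpha**2 + sigma**2 == 1`, substituting the definitions of `x_t` and `v` into `recover` returns the original `x0` and `eps` exactly.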

Example code for sampling images from a text prompt can be found in the Colab notebook, or directly in the app source code for the Gradio demo hosted on this Space.
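As a rough illustration of what such a sampler does internally (a toy sketch, not the repo's actual code: the model is a dummy, the latent is a scalar, and all names are assumptions), a DDIM-style loop in the v-parameterisation repeatedly predicts `v`, recovers the clean latent and noise, and re-noises to the next timestep:

```python
import math

# Toy DDIM-style sampling loop in the v-parameterisation. A real
# sampler would start from Gaussian noise shaped like the VQGAN
# latent and call a CLOOB-conditioned U-Net instead of dummy_model.

def alpha_sigma(t):
    return math.cos(t * math.pi / 2), math.sin(t * math.pi / 2)

def dummy_model(x, t):
    # stands in for the distilled diffusion model's v prediction
    return 0.0

def sample(steps=10):
    x = 1.0  # would be Gaussian noise in a real sampler
    ts = [i / steps for i in range(steps, 0, -1)]  # 1.0 down to 1/steps
    for i, t in enumerate(ts):
        v = dummy_model(x, t)
        alpha, sigma = alpha_sigma(t)
        x0 = alpha * x - sigma * v   # predicted clean latent
        eps = sigma * x + alpha * v  # predicted noise
        t_next = ts[i + 1] if i + 1 < len(ts) else 0.0
        a_n, s_n = alpha_sigma(t_next)
        x = a_n * x0 + s_n * eps     # re-noise to the next timestep
    return x
```

The final `x` would then be decoded back to image space by the VQGAN autoencoder.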

Demo images

[Sample image]

Prompt: "A martian landscape painting, oil on canvas"
