Replies: 1 comment
-
For the path to the VAE, you should give the path to the directory containing it, not the path to the file itself. If your VAE is not in .bin format, you'll probably need to extract/convert it; maybe one of the scripts here will help: https://github.com/ShivamShrirao/diffusers/tree/main/scripts

Sample size determines how big your input images are/will be. A larger sample size will yield higher-resolution outputs and more detail, at the cost of VRAM.
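As a concrete sketch of "directory, not file" (this assumes the VAE has already been converted to the diffusers layout; the folder name `my_vae/` is hypothetical):

```python
# A diffusers-format VAE is a folder, and you point the script at the folder,
# not at the .bin file inside it:
#
#   my_vae/
#     config.json
#     diffusion_pytorch_model.bin
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("my_vae")  # pass the directory path
print(vae.config.sample_size)                  # sample size stored in the VAE config, e.g. 256 or 512
```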
-
I have been trying to use different VAE models with the anime/NAI model. I'm trying to get better clarity on distant objects, but whatever path I input, it gives an error 2 (no such path); it shows something like stable__diffusion__model__dreambooth__, etc. I'm attempting to use it without a token and that kind of interface, because models like Animefull-latest aren't on Hugging Face, except AnythingV3.0, and that one is hard to train on because its images are more resolved compared to Animefull-latest.

Moreover, how do I convert something like “file.vae.pt” so it works with DreamBooth? I attempted a simple extension change to .bin, but it gave an error. So is the VAE used for normal image generation different from the “diffusion_pytorch_model.bin” used for model training?

Furthermore, what does sample size do? I saw the NAI VAE model uses 512 instead of 256. Does that setting have a general effect on training?
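A small sketch of why simply renaming “file.vae.pt” to .bin isn't enough (this assumes a typical NAI-style checkpoint; the exact key layout may differ in your file):

```python
# A raw .vae.pt checkpoint is just a torch pickle, often with its weights nested
# under "state_dict" and using the original LDM key names. The diffusers
# diffusion_pytorch_model.bin expects remapped key names plus a config.json next
# to it, so a conversion script (not a rename) is needed.
import torch

ckpt = torch.load("file.vae.pt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(list(state_dict.keys())[:5])  # compare these against a converted diffusers VAE folder
```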