
Won't run out of the box #1

Open
joetm opened this issue Jul 4, 2022 · 5 comments

Comments

@joetm

joetm commented Jul 4, 2022

Missing logs/f8-kl-clip-encoder-256x256-run1/configs/2022-06-01T22-11-40-project.yaml

@benedlore

What exactly is the difference between this repo and Latent Diffusion (https://github.com/CompVis/latent-diffusion)? They have the same readme.

@JonnoFTW

JonnoFTW commented Jul 27, 2022

I also had this issue. After downgrading torchmetrics to 0.6.0 (see NVIDIA/DeepLearningExamples#1113) and applying the patch from #4,

I get an ImportError:

ImportError: cannot import name 'CLIPTokenizer' from 'transformers' (unknown location)

Edit:

Upgrading transformers to 4.20.1 fixed the issue, but then there's an issue with openssl. I copied pycrypto.so and libssl.so.3 from another conda env I had, but this is a band-aid fix.
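Putting the two version changes from this comment together, the working combination appears to be torchmetrics 0.6.0 plus transformers 4.20.1. A minimal sketch of pinning both in one pip invocation (the helper name and approach are illustrative, not from the repo; adjust to your environment):

```python
import sys

def pip_pin_command(packages):
    """Build a `python -m pip install` command that pins exact versions."""
    args = [sys.executable, "-m", "pip", "install"]
    args += [f"{name}=={version}" for name, version in packages.items()]
    return args

# Versions reported to work in this thread.
cmd = pip_pin_command({"torchmetrics": "0.6.0", "transformers": "4.20.1"})
# Run with: subprocess.check_call(cmd)
```

Pinning both in a single install lets pip resolve the pair together instead of downgrading one after the other.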

@fjenett

fjenett commented Jul 30, 2022

You can easily fix this by adding the large text2img to the params:

python scripts/txt2img.py \
    --prompt "a virus monster is playing guitar, oil on canvas" \
    --config configs/latent-diffusion/txt2img-1p4B-eval.yaml \
    --ckpt models/ldm/text2img-large/model.ckpt \
    --ddim_eta 0.0 --n_samples 4 --n_iter 4 --scale 5.0  --ddim_steps 50
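Since the original report was a missing config file, it may help to verify that the files this command expects are actually on disk before launching it. A small sketch (paths taken from the command above; the helper is hypothetical):

```python
from pathlib import Path

def missing_paths(*paths):
    """Return the subset of the given paths that don't exist on disk."""
    return [p for p in paths if not Path(p).exists()]

missing = missing_paths(
    "configs/latent-diffusion/txt2img-1p4B-eval.yaml",
    "models/ldm/text2img-large/model.ckpt",
)
if missing:
    print("Fetch these before running txt2img.py:", missing)
```

The checkpoint in particular is not shipped with the repo and has to be downloaded separately.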

@neonsecret

> You can easily fix this by adding the large text2img to the params:
>
> python scripts/txt2img.py \
>     --prompt "a virus monster is playing guitar, oil on canvas" \
>     --config configs/latent-diffusion/txt2img-1p4B-eval.yaml \
>     --ckpt models/ldm/text2img-large/model.ckpt \
>     --ddim_eta 0.0 --n_samples 4 --n_iter 4 --scale 5.0 --ddim_steps 50

Same issue out of the box; this solution worked for me.

@faraday

faraday commented Aug 25, 2022

> You can easily fix this by adding the large text2img to the params:
>
> python scripts/txt2img.py \
>     --prompt "a virus monster is playing guitar, oil on canvas" \
>     --config configs/latent-diffusion/txt2img-1p4B-eval.yaml \
>     --ckpt models/ldm/text2img-large/model.ckpt \
>     --ddim_eta 0.0 --n_samples 4 --n_iter 4 --scale 5.0 --ddim_steps 50

This doesn't return the same results, though.
You'd immediately notice the quality has decreased.
I don't think it's the same model.

CapsAdmin pushed a commit to CapsAdmin/stable-diffusion that referenced this issue Sep 18, 2022
update README.md to add additional steps for AMD cards
mnixry pushed a commit to mnixry/stable-diffusion-novelai that referenced this issue Oct 7, 2022
blefaudeux pushed a commit to blefaudeux/stable-diffusion that referenced this issue Oct 10, 2022
* update reqs
* add image variations
* update readme
6 participants