
4090 run flux load_lora_weights error? #152

Open

lonngxiang opened this issue Sep 5, 2024 · 2 comments

@lonngxiang

Running `python run.py config/whatever_you_want.yml` works, but loading the LoRA weights with the following code fails and the process is killed:

[screenshot: process killed]

from diffusers import AutoPipelineForText2Image
import torch

# Load the local FLUX.1-dev checkpoint in bfloat16
pipeline = AutoPipelineForText2Image.from_pretrained("/ai/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipeline.enable_model_cpu_offload()

# Load the LoRA trained with ai-toolkit
pipeline.load_lora_weights('/ai/ai-toolkit/output/my_first_flux_lora_v1', weight_name='my_first_flux_lora_v1_000001000.safetensors')
image = pipeline('a Yarn art style tarot card').images[0]

@jlonge4

jlonge4 commented Sep 6, 2024

@lonngxiang I think even with CPU offloading, 24GB of VRAM wouldn't be enough to run inference here without hitting a CUDA OOM.

Roughly: 16.5B parameters total minus 4.5B for the text encoder leaves 12B parameters; at 2 bytes each in bf16, that's about 24GB for the transformer alone. The OOM tends to hit at around 95% utilization (~22.8GB), I think.

Just a guess here.
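
If VRAM really is the bottleneck, one thing that might help is sequential offloading instead of model-level offloading; it is much slower, but it keeps only one submodule on the GPU at a time. A minimal sketch, reusing the paths from the original post:

from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("/ai/FLUX.1-dev", torch_dtype=torch.bfloat16)

# Sequential offload moves submodules to the GPU one at a time,
# trading inference speed for a much lower peak VRAM footprint.
pipeline.enable_sequential_cpu_offload()

pipeline.load_lora_weights('/ai/ai-toolkit/output/my_first_flux_lora_v1', weight_name='my_first_flux_lora_v1_000001000.safetensors')
image = pipeline('a Yarn art style tarot card').images[0]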

@wotulong

Maybe you can try smaller width and height params (such as 512) in the pipeline call.
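
For example (a sketch; width and height are standard pipeline call arguments, and activation memory scales with resolution):

# Lower resolution -> smaller latents -> less activation memory at inference
image = pipeline('a Yarn art style tarot card', width=512, height=512).images[0]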
