
Omnigen saturates RAM and VRAM completely and is also extremely slow! #664

Open
51M0N3-3XC opened this issue Nov 16, 2024 · 1 comment


51M0N3-3XC commented Nov 16, 2024

Omnigen saturates RAM and VRAM completely and is also extremely slow!

In the console I see this warning:
C:\pinokio\api\omnigen.git\app\env\lib\site-packages\diffusers\models\attention_processor.py:2358: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
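If it helps triage: a minimal check, assuming PyTorch ≥ 2.0 is installed in the same environment, of which scaled-dot-product-attention (SDPA) backends the build enables. The warning above is PyTorch reporting that its flash-attention kernel was not compiled into this build, so attention falls back to a slower backend.

```python
import torch

# Report which SDPA backends this PyTorch build has enabled.
# On Windows wheels the flash-attention kernel is often missing,
# which triggers the "Torch was not compiled with flash attention" warning.
print("flash SDP enabled:          ", torch.backends.cuda.flash_sdp_enabled())
print("memory-efficient SDP enabled:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math (fallback) SDP enabled: ", torch.backends.cuda.math_sdp_enabled())
```

If flash is off but memory-efficient SDP is on, the warning is cosmetic and the slowdown and memory pressure likely come from elsewhere (e.g. model weights spilling from VRAM into system RAM).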

System: Windows 11, Ryzen 5 8500G, 32 GB RAM, RTX 3060 12 GB

I did some research but haven't found a solution yet.

51M0N3-3XC changed the title from "Omnigen saturate RAM and VRAM completely" to "Omnigen saturate RAM and VRAM completely and also is extremely slow!" on Nov 16, 2024
3 participants