OmniGen completely saturates RAM and VRAM, and it is also extremely slow!
In the console I see this warning:
C:\pinokio\api\omnigen.git\app\env\lib\site-packages\diffusers\models\attention_processor.py:2358: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
System: Windows 11, Ryzen 5 8500G, 32 GB RAM, RTX 3060 12 GB
I did some research but haven't found a solution yet.
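As a side note on the warning above: "Torch was not compiled with flash attention" is informational, not an error. It means this PyTorch build falls back to the slower scaled-dot-product-attention (SDPA) kernels. A minimal sketch (assuming PyTorch 2.x is installed) to check which SDPA backends the local build actually enables:

```python
# Hedged sketch: inspect which SDPA backends this PyTorch build enables.
# If flash SDP is disabled, attention runs on the memory-efficient or
# plain math kernels instead, which is slower but functionally identical.
import torch

print("PyTorch version:", torch.__version__)
print("flash SDP enabled:         ", torch.backends.cuda.flash_sdp_enabled())
print("mem-efficient SDP enabled: ", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math SDP enabled:          ", torch.backends.cuda.math_sdp_enabled())
```

If flash SDP reports `False` on Windows, that matches the warning; it does not by itself explain the RAM/VRAM saturation, which is more likely the model spilling out of the 12 GB of VRAM.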
51M0N3-3XC changed the title from "Omnigen saturate RAM and VRAM completely" to "Omnigen saturate RAM and VRAM completely and also is extremely slow!" on Nov 16, 2024.