From what I understand:
One other issue that was easy to fix: spaces and dots in the LoRA name would brick the exported engine. The fix was modifying a line or two, nothing complex, five minutes of work tops even for someone who doesn't understand code. It remained unfixed from the first version to the second.
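The fix described above presumably amounts to sanitizing the LoRA name before it is used as a file or engine identifier. A minimal sketch of that idea (the function name and replacement rule here are my assumptions, not the extension's actual code):

```python
import re

def sanitize_lora_name(name: str) -> str:
    """Replace spaces and dots, which would otherwise break the
    exported engine's filename, with underscores.
    NOTE: hypothetical helper, not the extension's real code."""
    return re.sub(r"[ .]", "_", name)

print(sanitize_lora_name("my cool.lora v1.5"))  # my_cool_lora_v1_5
```

Any similar normalization applied at export time would have avoided the bricked engines.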
More likely, NVIDIA has dropped this project entirely. Seven months without any update is a REALLY long time. They might be waiting for some breakthrough in torch or elsewhere, but for now it's safe to say the project is dead.
Hey, I'm really confused about why this isn't a top priority for Nvidia. It's been a year, and it still only works with the automatic1111 webui, and even then not consistently.
When it does work, it's incredible! Imagine generating 1024x1024 SDXL images in just 2.3 seconds at 80 steps. It's mind-blowing. So, what's the deal, Nvidia? Why aren't the developers of ComfyUI, Forge, and Fooocus supporting this? It should be a top priority; it's such a waste that it isn't. This could be a game-changer for Flux, and it would save so much energy, since generation times would be cut roughly sevenfold.
Seriously, can someone explain why this hasn't become the standard and why it's not supported anywhere else? Nvidia RTX graphics cards are the best for inference and image generation, but they're not even close to reaching their full potential. Do people just not know about this? If that's the case, we can educate them, but what's Nvidia's excuse for not supporting this more?