-
You should move this PR to #1591 (comment) -> https://github.com/Juqowel/GPU_For_T5
-
Regarding the Flux models: ComfyUI has a feature to set/force loading the text encoders on the CPU rather than the GPU, which helps a lot with VRAM management. It does impact speed a bit, roughly a few extra seconds per run, but the benefit is that I can use higher-precision models and save the time otherwise spent unloading the text encoders from VRAM every time the model or a LoRA changes.
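To make the idea concrete, here is a minimal PyTorch sketch of the split-device pattern described above: a text encoder pinned to the CPU while the main model sits on the GPU (when one exists), with only the small conditioning tensor crossing devices. This is not ComfyUI's actual implementation; the modules are hypothetical stand-ins for a text encoder and a diffusion model.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a "text encoder" and a "diffusion model".
text_encoder = nn.Embedding(1000, 64)   # pinned to CPU, never touches VRAM
diffusion_model = nn.Linear(64, 64)     # lives on the GPU when one is present

gpu = torch.device("cuda" if torch.cuda.is_available() else "cpu")
text_encoder.to("cpu")
diffusion_model.to(gpu)

tokens = torch.randint(0, 1000, (1, 8))  # fake token ids
with torch.no_grad():
    cond = text_encoder(tokens)          # encoding runs on the CPU
    cond = cond.to(gpu)                  # only this small tensor is copied over
    out = diffusion_model(cond)          # heavy model runs on the GPU

print(out.shape)
```

Because the encoder's weights never occupy VRAM, swapping the main model or a LoRA does not force the text encoders to be unloaded and reloaded; the cost is the per-run CPU encode plus one small host-to-device copy.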