Tiled Sampler for FLUX #4955
Comments
@Poukpalaova The OP is asking for help on weaker GPUs. I don't think suggesting a 6GB ControlNet model is really the help that he needs.
Flux doesn't really need a ControlNet to do tiled upscaling. You can get very coherent results just using the Ultimate SD Upscale node with a low denoise strength of about 0.20. If a particular prompt and seed shows visible seams or hallucinations, you can instead use this tiled diffusion node, which supports several different algorithms: https://github.com/shiimizu/ComfyUI-TiledDiffusion. For me, Mixture of Diffusers with a tile overlap of 64 seems to work pretty well. It is slower than Ultimate SD Upscale, though.
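For the curious, here is a minimal sketch of what Mixture-of-Diffusers-style tiled sampling does under the hood, written in plain PyTorch rather than actual node code. The `denoise_tile` callback, the 128 tile size, and the linear feathering ramp are illustrative assumptions; only the 64-pixel overlap corresponds to the setting mentioned above.

```python
import torch

def blend_tiles(latent, denoise_tile, tile=128, overlap=64):
    """Process `latent` (B, C, H, W) in overlapping tiles and feather-blend the results.

    Assumes H and W are at least `tile`. `denoise_tile` is a stand-in for whatever
    sampler step the real node would run on each tile.
    """
    B, C, H, W = latent.shape
    out = torch.zeros_like(latent)
    weight = torch.zeros(1, 1, H, W, device=latent.device)

    def ramp(n):
        # 1D weight that fades toward both edges so neighbouring tiles blend smoothly.
        r = torch.ones(n, device=latent.device)
        fade = torch.linspace(1.0 / overlap, 1.0, overlap, device=latent.device)
        r[:overlap] = fade
        r[-overlap:] = torch.flip(fade, dims=[0])
        return r

    w2d = ramp(tile)[None, None, :, None] * ramp(tile)[None, None, None, :]
    stride = tile - overlap
    for y in range(0, max(H - overlap, 1), stride):
        for x in range(0, max(W - overlap, 1), stride):
            y0, x0 = min(y, H - tile), min(x, W - tile)
            patch = latent[:, :, y0:y0 + tile, x0:x0 + tile]
            out[:, :, y0:y0 + tile, x0:x0 + tile] += denoise_tile(patch) * w2d
            weight[:, :, y0:y0 + tile, x0:x0 + tile] += w2d
    return out / weight


# Identity "denoiser" just to show the call shape; the real thing would be a sampler step.
blended = blend_tiles(torch.randn(1, 16, 256, 256), denoise_tile=lambda t: t)
```

The actual implementations do this at every sampling step and with smarter weighting, which is why they avoid the hard seams you get from naively sampling each tile on its own.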
Thank you. Yes, it let me generate images larger than 1024x1024, but they come out with artifacts in the form of repeated areas. Maybe it's in the settings; I will test further.
Are you taking your initial image, scaling it, doing a VAE Encode, and feeding it back in as your latent image? You need to start with your initial image, or the final output will contain only repeats and won't look much like your initial image. The low denoise strength on the tiled stage is also important, so that Flux takes most of its guidance from the initial image.

Note that the VAE Encode and VAE Decode steps are quite memory heavy, so I recommend using VAE Encode (Tiled) and VAE Decode (Tiled) on the second stage with a tile size of 1024 so you don't overwhelm your VRAM. These nodes are now part of core, though they are still in testing. The Tiled Diffusion custom node I linked above also contains its own tiled encode/decode nodes, but for me they seemed to use more memory.

Load this image into ComfyUI if you want to see my workflow, though it is not meant for low-VRAM situations. My testing showed that the output quality is affected by how the initial image is upscaled; I was happiest with 4x UltraSharp as the upscaling model. This workflow is also a little more complicated than it needs to be so that I can use custom ODE solvers via either https://github.com/redhottensors/ComfyUI-ODE or https://github.com/memmaptensor/ComfyUI-RK-Sampler (the latter is slower for me).
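If it helps to see the shape of that second stage without opening the workflow image, here is a rough outline written as hypothetical Python helpers. None of these names are real ComfyUI APIs; the function only mirrors the node graph described above (pixel upscale, tiled encode, low-denoise sample, tiled decode).

```python
# Hypothetical helpers only; this mirrors the node graph, not ComfyUI's actual Python API.
def second_stage(base_image, upscale_model, vae, sampler, prompt,
                 denoise=0.20, tile_size=1024):
    # 1) Enlarge the first-stage output in pixel space (e.g. with 4x UltraSharp).
    big = upscale_model(base_image)
    # 2) Tiled VAE encode keeps peak VRAM bounded on large images.
    latent = vae.encode_tiled(big, tile_size=tile_size)
    # 3) Re-denoise lightly so Flux takes most of its guidance from the image itself.
    latent = sampler(latent, prompt=prompt, denoise=denoise)
    # 4) Tiled VAE decode for the same memory reason as the encode.
    return vae.decode_tiled(latent, tile_size=tile_size)
```

The key knobs are the denoise strength (around 0.20) and the 1024 tile size on the encode/decode; everything else is the same img2img pass you would run at the base resolution.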
This issue is being marked stale because it has not had any activity for 30 days. Reply below within 7 days if your issue still isn't solved, and it will be left open. Otherwise, the issue will be closed automatically.
Your question
Is there a node for FLUX that supports TiledSampler? The ones I've tried don't work with UNet models, and this could help generate larger images on weak graphics cards.
I have a Quadro K620 with 2GB of video memory and 16GB of RAM, and it is not possible to change this computer's configuration.
Model: Flux1-schnell. The maximum resolution I manage to generate is 768x1024 or 1024x768. Generation is very slow (20 minutes for 5 steps), but it is enough for my needs.
Logs
No response
Other
No response