
Tiled Sampler for FLUX #4955

Closed
VladimirNCh opened this issue Sep 17, 2024 · 6 comments
Labels
Stale This issue is stale and will be autoclosed soon. User Support A user needs help with something, probably not a bug.

Comments

@VladimirNCh

Your question

Is there a node for FLUX that supports TiledSampler? The ones I've tried don't work with UNet models, and this could help generate larger images on weak graphics cards.

I have a Quadro K620 with 2GB of video memory and 16GB RAM and it is not possible to make changes to the configuration of this computer.

Model: Flux1-schnell. The maximum resolution I manage to generate is 768x1024 or 1024x768. The generation is very slow (about 20 minutes for 5 steps), but that is enough for my needs.
[attached image: img_00024_]

Logs

No response

Other

No response

@VladimirNCh VladimirNCh added the User Support A user needs help with something, probably not a bug. label Sep 17, 2024
@Poukpalaova

Use these nodes:
[attached screenshot]
@camoody1

@Poukpalaova The OP is asking for help on weaker GPUs. I don't think suggesting a 6GB ControlNet model is really the help he needs.

@Adreitz

Adreitz commented Sep 21, 2024

Flux doesn't really need a controlnet to do tiled upscaling. You can get very coherent results just using Ultimate SD Upscale node and a low denoise strength of about 0.20. If a particular prompt and seed has visible seams or hallucinations, you can instead use this tiled diffusion node that supports multiple different algorithms: https://github.com/shiimizu/ComfyUI-TiledDiffusion. For me, Mixture of Diffusers with a tile overlap of 64 seems to work pretty well. It is slower than Ultimate SD Upscale, though.
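For anyone curious why the Mixture of Diffusers approach hides seams: it denoises overlapping tiles and blends them with center-weighted masks, so the low-confidence tile edges are averaged away. A rough standalone sketch of just the tiling/blending geometry (hypothetical helper names, not the actual node code):

```python
import numpy as np

def tile_positions(length, tile, overlap):
    """Start offsets of tiles of size `tile` covering `length`,
    stepping by (tile - overlap) and snapping the last tile to the edge."""
    if length <= tile:
        return [0]
    stride = tile - overlap
    starts = list(range(0, length - tile + 1, stride))
    if starts[-1] + tile < length:
        starts.append(length - tile)
    return starts

def center_weight(tile, sigma_frac=0.3):
    """2D weight mask, highest at the tile center (Mixture-of-Diffusers
    style), so pixels near tile edges contribute less where tiles overlap."""
    ax = np.arange(tile) - (tile - 1) / 2.0
    w = np.exp(-(ax ** 2) / (2.0 * (sigma_frac * tile) ** 2))
    return np.outer(w, w)

def merge_tiles(tiles, positions, height, width, tile):
    """Weighted average of the overlapping (already denoised) tiles."""
    acc = np.zeros((height, width))
    wsum = np.zeros((height, width))
    w = center_weight(tile)
    for (y, x), t in zip(positions, tiles):
        acc[y:y + tile, x:x + tile] += t * w
        wsum[y:y + tile, x:x + tile] += w
    return acc / wsum
```

With an overlap of 64, adjacent tiles share a 64-pixel band whose pixels are a weighted average of both tiles, which is why no hard seam survives.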

@VladimirNCh
Author

https://github.com/shiimizu/ComfyUI-TiledDiffusion. For me, Mixture of Diffusers with a tile overlap of 64 seems to work pretty well. It is slower than Ultimate SD Upscale, though.

Thank you. Yes, it allowed me to generate images larger than 1024x1024, but they come out with artifacts in the form of repeated areas. Maybe it's in the settings; I will test further.

@Adreitz

Adreitz commented Sep 25, 2024

Are you taking your initial image, scaling it, doing a VAE Encode, and feeding it back in as your latent image? You need to start with your initial image or the final output will contain only repeats and won't look much like your initial image. The low denoise strength on the tiled stage is also important, so that Flux takes most of its guidance from the initial image.
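On the denoise point: in most img2img samplers, the denoise strength effectively controls how many of the final schedule steps actually run, which is why a low value keeps the output anchored to the encoded initial image. A tiny illustration of that split (assuming a simple `round(steps * denoise)` convention; ComfyUI's exact scheduler math may differ):

```python
def img2img_start(total_steps, denoise):
    """Split a schedule: with low denoise only the last few steps are
    sampled, so the result stays close to the encoded initial image."""
    steps_run = round(total_steps * denoise)
    return total_steps - steps_run, steps_run

# e.g. img2img_start(20, 0.2) -> (16, 4): only 4 of 20 steps are resampled
```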

Note that the VAE Encode and VAE Decode steps are quite memory heavy, so I recommend using VAE Encode (Tiled) and VAE Decode (Tiled) on the second stage with a tile size of 1024 so you don't overwhelm your VRAM. These nodes are now part of core, though they are still in testing. The Tiled Diffusion custom node I linked above also contains its own tiled encode/decode nodes, but for me they seemed to use more memory.
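The memory saving from the tiled VAE nodes comes from encoding one tile at a time instead of the whole image at once. A minimal sketch of the idea (with a stand-in for the real VAE encoder, and omitting the tile overlap/blending the actual nodes use to hide seams):

```python
import numpy as np

LATENT_SCALE = 8  # Flux's VAE downsamples 8x per spatial dimension

def fake_vae_encode(tile):
    """Stand-in for a real VAE encoder: 8x average pooling, just to
    mimic the latent downscale (the real encoder is a neural net)."""
    h, w = tile.shape
    return tile.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))

def tiled_encode(image, tile=1024):
    """Encode one tile at a time so peak memory scales with the tile
    size, not the full image size."""
    h, w = image.shape
    latent = np.zeros((h // LATENT_SCALE, w // LATENT_SCALE))
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            t = image[y:y + tile, x:x + tile]
            latent[y // 8:(y + t.shape[0]) // 8,
                   x // 8:(x + t.shape[1]) // 8] = fake_vae_encode(t)
    return latent
```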

Load this image into Comfy if you want to see my workflow, though it is not meant for low-VRAM situations. Note that my testing showed the output quality is affected by the upscaling method used for the initial image; I was happiest with 4x UltraSharp as the upscaling model. This workflow is also a little more complicated than it needs to be so that I can use custom ODE solvers via either https://github.com/redhottensors/ComfyUI-ODE or https://github.com/memmaptensor/ComfyUI-RK-Sampler (the latter is slower for me).
[attached workflow image: ComfyUI_00025_]


This issue is being marked stale because it has not had any activity for 30 days. Reply below within 7 days if your issue still isn't solved, and it will be left open. Otherwise, the issue will be closed automatically.

@github-actions github-actions bot added the Stale This issue is stale and will be autoclosed soon. label Oct 26, 2024
@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Nov 3, 2024