
[gradio] CUDA not releasing GPU memory? #4

Open
@t00350320

Description


diffusers 0.28.0.dev0

The first run works well. On the second run, after changing the content scale to 2, I hit CUDA out of memory errors.

  1. I guess the content scale is passed through without being converted to float? (a possible workaround sketch follows the traceback)
  2. gradio does not release GPU memory after the first error:
  File "/home/notebook/code/personal/CSGO/gradio/app.py", line 172, in create_image
    images = csgo.generate(pil_content_image=content_image, pil_style_image=style_image,
  File "/home/notebook/code/personal/CSGO/./ip_adapter/ip_adapter.py", line 735, in generate
    images = self.pipe(
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/notebook/code/group/diffusers/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py", line 1235, in __call__
    self.check_inputs(
  File "/home/notebook/code/group/diffusers/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py", line 753, in check_inputs
    raise TypeError("For single controlnet: `controlnet_conditioning_scale` must be type `float`.")
TypeError: For single controlnet: `controlnet_conditioning_scale` must be type `float`.
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 31.75 GiB total capacity; 28.56 GiB already allocated; 8.75 MiB free; 29.92 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
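
A minimal workaround sketch, assuming `csgo` is the pipeline object built in gradio/app.py and `create_image` is the handler from the traceback; the `content_scale` keyword is hypothetical (however the slider value is actually forwarded to `controlnet_conditioning_scale` inside ip_adapter.py), and casting to float plus cleaning up in `finally` are my guesses, not confirmed fixes:

    import gc
    import torch

    def create_image(content_image, style_image, content_scale):
        try:
            # A gradio slider set to 2 can hand back an int; the single-ControlNet
            # check in diffusers requires a plain Python float, so cast explicitly.
            scale = float(content_scale)
            images = csgo.generate(
                pil_content_image=content_image,
                pil_style_image=style_image,
                # hypothetical keyword: whatever argument ip_adapter.py maps to
                # controlnet_conditioning_scale
                content_scale=scale,
            )
            return images
        finally:
            # Release cached GPU memory even when generate() raises, so the next
            # request does not start with ~30 GiB already reserved by PyTorch.
            gc.collect()
            torch.cuda.empty_cache()

With the cast in place the TypeError should no longer fire on int inputs, and the cleanup in finally should keep a failed request from leaving the allocator full for the next one.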

PTAL
