Expected Behavior
The SUPIR_conditioner node should execute without errors and correctly process input data.
The workflow should run smoothly without encountering a TypeError: 'NoneType' object is not callable error.
The process should complete successfully without running out of GPU memory.
All input data should be correctly processed and passed to the subsequent nodes.
The execution should not fail due to memory allocation issues, and PyTorch should manage GPU memory efficiently.
Actual Behavior
When executing the SUPIR_conditioner node, a TypeError: 'NoneType' object is not callable error occurs.
The issue happens in nodes_v2.py at line 637.
The system also encounters HIP out of memory errors when running KSampler, stating that it tried to allocate 6.46 GiB, but only 3.35 GiB was free.
PyTorch has already allocated 16.97 GiB, with 127.91 MiB reserved but unallocated.
Even setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True does not resolve the memory fragmentation issue.
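Worth noting: expandable segments only take effect if the variable is set before PyTorch initializes the HIP allocator, so it must be in the environment before the first GPU allocation. A minimal sketch of verifying this from Python (the launcher-script remark is an assumption, not from this report):

```python
import os

# PYTORCH_HIP_ALLOC_CONF must be set before torch touches the GPU; with the
# Windows portable build this normally means editing the launcher .bat file
# rather than setting it mid-process.
os.environ["PYTORCH_HIP_ALLOC_CONF"] = "expandable_segments:True"

# Illustrative check that the variable is visible to the current process.
print(os.environ["PYTORCH_HIP_ALLOC_CONF"])
```

If the variable is set after `import torch` has already allocated, the allocator silently keeps its default configuration, which could explain why it appears to have no effect.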
Steps to Reproduce
Run ComfyUI on Windows using the portable version.
Load a workflow that includes the SUPIR_conditioner node.
Click Queue Prompt to start processing.
The following errors appear:
TypeError: 'NoneType' object is not callable for the SUPIR_conditioner node.
HIP out of memory error when running KSampler, indicating insufficient VRAM.
The process fails to complete due to these errors.
Debug Logs
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Loading weights to: cpu
Diffusion using fp16
FETCH ComfyRegistry Data: 10/32
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1,4,32,32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Attempting to load SDXL model from node inputs
Requested to load SDXL
loaded completely 5680.8 4897.0483474731445 True
!!! Exception during processing !!! Failed to load SDXL model
Traceback (most recent call last):
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-supir\nodes_v2.py", line 906, in process
if is_accelerate_available:
^^^^^^^^^^^^^^^^^^^^^^^
NameError: name 'is_accelerate_available' is not defined
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-supir\nodes_v2.py", line 918, in process
raise Exception("Failed to load SDXL model")
Exception: Failed to load SDXL model
Prompt executed in 5.99 seconds
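The first traceback is a plain NameError: is_accelerate_available is referenced without ever being assigned, which typically points at a guarded import that was removed or never ran. A minimal sketch of the usual guard pattern (illustrative only, not the actual nodes_v2.py code):

```python
# Guarded optional-dependency check: define the flag unconditionally so
# that later code can reference it even when 'accelerate' is not installed.
try:
    import accelerate  # noqa: F401
    is_accelerate_available = True
except ImportError:
    is_accelerate_available = False

if is_accelerate_available:
    print("accelerate found, using low-memory loading")
else:
    print("accelerate missing, falling back to plain loading")
```

With this pattern the name always exists as a boolean, so a missing package degrades gracefully instead of raising NameError at line 906.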
FETCH ComfyRegistry Data: 15/32
got prompt
Loading weights to: cpu
Diffusion using fp16
Encoder using bf16
[TiledVAE]: input_size: torch.Size([1,3,4224,2976]), tile_size: 512, padding: 32
[TiledVAE]: split to 9x6 = 54 tiles. Optimal tile size 512x480, original tile size 512x512
[TiledVAE]: Executing Encoder Task Queue: 11%|███▉ |550/4914 [00:06<00:36,120.40it/s]FETCH ComfyRegistry Data: 20/32
[TiledVAE]: Executing Encoder Task Queue: 35%|███████████▉ |1717/4914 [00:14<00:15,200.82it/s]FETCH ComfyRegistry Data: 25/32
[TiledVAE]: Executing Encoder Task Queue: 100%|██████████████████████████████████|4914/4914 [00:20<00:00,239.61it/s]
[TiledVAE]: Done in 20.894s, max VRAM alloc 751.366 MB
[TiledVAE]: input_size: torch.Size([1,4,528,372]), tile_size: 64, padding: 11
[TiledVAE]: split to 8x6 = 48 tiles. Optimal tile size 64x64, original tile size 64x64
[TiledVAE]: Executing Decoder Task Queue: 33%|███████████▎ |1959/5904 [00:02<00:05,788.40it/s]FETCH ComfyRegistry Data: 30/32
[TiledVAE]: Executing Decoder Task Queue: 50%|█████████████████▏ |2981/5904 [00:07<00:13,210.77it/s]FETCH ComfyRegistry Data [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
[TiledVAE]: Executing Decoder Task Queue: 53%|██████████████████ |3142/5904 [00:07<00:13,207.28it/s] [DONE]
[TiledVAE]: Executing Decoder Task Queue: 54%|██████████████████▏ |3166/5904 [00:07<00:12,213.08it/s][ComfyUI-Manager] All startup tasks have been completed.
[TiledVAE]: Executing Decoder Task Queue: 100%|██████████████████████████████████|5904/5904 [00:47<00:00,124.39it/s]
[TiledVAE]: Done in 47.869s, max VRAM alloc 1846.727 MB
captions: [['']]
Batch captioning
!!! Exception during processing !!! 'NoneType' object is not callable
Traceback (most recent call last):
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-supir\nodes_v2.py", line 637, in condition
_c, _uc = SUPIR_model.conditioner.get_unconditional_conditioning(cond, uncond)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-supir\sgm\modules\encoders\modules.py", line 190, in get_unconditional_conditioning
c = self(batch_c)
^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-supir\sgm\modules\encoders\modules.py", line 211, in forward
emb_out = embedder(batch[embedder.input_key])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\cluudp’\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-supir\sgm\modules\encoders\modules.py", line 493, in forward
batch_encoding = self.tokenizer(
^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not callable
Prompt executed in 70.17 seconds
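The final TypeError is consistent with an attribute that was expected to hold a callable still being None, e.g. a tokenizer that was never assigned because the earlier SDXL load failed. It reproduces in isolation like this (class and attribute names below are hypothetical, not the actual SUPIR code):

```python
class Embedder:
    def __init__(self, tokenizer=None):
        # If model loading fails upstream, tokenizer is never assigned
        # and stays None.
        self.tokenizer = tokenizer

    def forward(self, text):
        # Calling a None attribute raises exactly the reported error.
        return self.tokenizer(text)

emb = Embedder()
try:
    emb.forward("a photo")
except TypeError as e:
    print(e)  # 'NoneType' object is not callable
```

This suggests the second run reused a SUPIR model object whose initialization had already failed, so fixing the load error (the NameError above it) is likely a prerequisite for the conditioner to work.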
Other
No response