
Press a key to continue . . . #4902

Open
ZeroCool22 opened this issue Sep 12, 2024 · 9 comments
Labels
Potential Bug User is reporting a bug. This should be tested.

Comments

@ZeroCool22

Expected Behavior

Create image without error.

Actual Behavior

No error is shown; the console just prints "Press a key to continue . . ." and the process exits.

Steps to Reproduce

Workflow.

Screenshot_3

Debug Logs

C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-09-12 19:20:32.868037
** Platform: Windows
** Python version: 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
** Python executable: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\python_embeded\python.exe
** ComfyUI Path: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI
** Log path: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\comfyui.log

Prestartup times for custom nodes:
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\rgthree-comfy
   1.5 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 16376 MB, total RAM 32680 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\web
### Loading: ComfyUI-Impact-Pack (V7.5)
### Loading: ComfyUI-Impact-Pack (Subpack: V0.6)
[Impact Pack] Wildcards loading done.
Total VRAM 16376 MB, total RAM 32680 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
### Loading: ComfyUI-Manager (V2.50.3)
### ComfyUI Revision: 2683 [b962db99] | Released on '2024-09-12'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
------------------------------------------
Comfyroll Studio v1.76 :  175 Nodes Loaded
------------------------------------------
** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
------------------------------------------

[rgthree] Loaded 42 epic nodes.
[rgthree] NOTE: Will NOT use rgthree's optimized recursive execution as ComfyUI has changed.

WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\was-node-suite-comfyui\was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
WAS Node Suite: Finished. Loaded 218 nodes successfully.

        "Don't wait. The time will never be just right." - Napoleon Hill


Import times for custom nodes:
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\cg-use-everywhere
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\websocket_image_save.py
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-mxToolkit
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI_JPS-Nodes
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\comfyui_segment_anything
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\comfy-image-saver
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Chibi-Nodes
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-GGUF
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\efficiency-nodes-comfyui
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\rgthree-comfy
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI_essentials
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
   0.1 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-KJNodes
   0.4 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Manager
   0.5 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-SAM2
   1.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
   1.6 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\was-node-suite-comfyui

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
got prompt
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
got prompt
got prompt
got prompt
C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py:79: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:212.)
  torch_tensor = torch.from_numpy(tensor.data) # mmap

ggml_sd_loader:
 0                             466
 8                             304
 1                              10
model weight dtype torch.bfloat16, manual cast: None
model_type FLUX
Requested to load FluxClipModel_
Loading 1 new model
loaded completely 0.0 9319.23095703125 True
Requested to load FluxClipModel_
Loading 1 new model
C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
Requested to load Flux
Loading 1 new model
loaded completely 0.0 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:28<00:00,  2.88s/it]
Requested to load AutoencodingEngine
Loading 1 new model
loaded completely 0.0 159.87335777282715 True
Prompt executed in 124.04 seconds
loaded completely 13323.958600265503 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:24<00:00,  2.47s/it]
Prompt executed in 26.19 seconds
loaded completely 13323.958600265503 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:24<00:00,  2.48s/it]
Prompt executed in 25.86 seconds
loaded completely 13323.958600265503 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:24<00:00,  2.47s/it]
Prompt executed in 25.85 seconds
loaded completely 13323.958600265503 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:24<00:00,  2.49s/it]
Prompt executed in 25.96 seconds
got prompt
got prompt
got prompt
got prompt
got prompt
loaded completely 13323.958600265503 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:25<00:00,  2.50s/it]
Prompt executed in 26.78 seconds
loaded completely 13323.958600265503 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:24<00:00,  2.47s/it]
Prompt executed in 25.83 seconds
loaded completely 13323.958600265503 12125.320556640625 True
 30%|████████████████████████▉                                                          | 3/10 [00:10<00:23,  3.35s/it]
Processing interrupted
Prompt executed in 10.41 seconds
got prompt
got prompt
got prompt
got prompt
got prompt

C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy>pause
Presione una tecla para continuar . . . (Spanish for "Press a key to continue . . .")

Other

No response

ZeroCool22 added the Potential Bug label on Sep 12, 2024
@ZeroCool22
Author

Again...

Screenshot_5

@bulutharbeli

I have had the same problem since last week. I have tried several different workflows, but it keeps disconnecting all the time.

@ZeroCool22
Author

I have had the same problem since last week. I have tried several different workflows, but it keeps disconnecting all the time.

Ok, so we need to find a commit that doesn't have this problem and check it out...

@yingxiongyanjin

again...

@mcmonkey4eva
Contributor

Check Task Manager / system resource usage. That type of sudden hard crash usually indicates you ran out of system RAM.
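If it is RAM exhaustion, one way to confirm it is to log system memory from the same Python environment while the workflow runs. A minimal sketch, assuming the third-party `psutil` package is available (it is not part of ComfyUI itself, so the import is guarded):

```python
# Minimal RAM-usage probe; assumes the third-party "psutil" package.
# Falls back gracefully if psutil is not installed.
def ram_usage_percent():
    """Return system RAM usage as a percentage, or None if psutil is missing."""
    try:
        import psutil
    except ImportError:
        return None
    return psutil.virtual_memory().percent

if __name__ == "__main__":
    usage = ram_usage_percent()
    if usage is None:
        print("psutil not installed; check Task Manager instead")
    else:
        print(f"System RAM usage: {usage:.1f}%")
```

Logging this once per sampling step would show whether usage climbs toward 100% right before the crash.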

@bramvera

bramvera commented Sep 17, 2024

I have the same problem just now.
Nope, system RAM is only at 43%.
image

EDIT: added video with hardware monitoring
https://github.com/user-attachments/assets/a132b7f0-14de-450e-a929-f3101704a5a9

Check Task Manager / system resource usage. That type of sudden hard crash usually indicates you ran out of system RAM.

@humanm1372

humanm1372 commented Sep 17, 2024

I have the same problem with Flux Dev and Schnell, on both fp8 and fp16.
I checked Task Manager for running out of resources, but that's not the issue. I also used the example workflows.
My setup: i7 10700K, RTX 3060 12GB, 32 GB 3600 MHz, Win 10. I guess I can use Dev at fp16, can't I?
Does somebody have a solution? I'll be grateful.

task1

@bramvera

OK, found a solution; at least for me it fixed this issue:

  • Run: sysdm.cpl
  • In the Windows System Properties box, make sure you are on the Advanced tab.
  • Click the Settings button under the Performance section.
  • In the Performance Options box, go to the Advanced tab.
  • Click the Change button under the Virtual memory section.
  • Uncheck "Automatically manage paging file size for all drives".
  • On your system drive (usually C:), select "No paging file", click Set, and reboot.

After rebooting, set it back to "System managed size", click Set, and reboot one more time.
image

I no longer have the ComfyUI sudden-crash issue.
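For reference, the paging-file state described in the steps above can also be read programmatically; on Windows, `psutil.swap_memory()` reports pagefile totals. A hedged sketch, again assuming the third-party `psutil` package:

```python
# Report paging-file / swap usage; assumes the third-party "psutil" package.
def pagefile_usage_mb():
    """Return (total_mb, used_mb) for the paging file, or None without psutil."""
    try:
        import psutil
    except ImportError:
        return None
    swap = psutil.swap_memory()
    # swap.total and swap.used are in bytes; convert to MiB.
    return (swap.total // 2**20, swap.used // 2**20)

if __name__ == "__main__":
    print("pagefile (total MB, used MB):", pagefile_usage_mb())
```

A total of 0 MB after the "No paging file" step would confirm the setting took effect before rebooting back to "System managed size".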

@ltdrdata
Collaborator

I have the same problem with Flux Dev and Schnell, on both fp8 and fp16. I checked Task Manager for running out of resources, but that's not the issue. I also used the example workflows. My setup: i7 10700K, RTX 3060 12GB, 32 GB 3600 MHz, Win 10. I guess I can use Dev at fp16, can't I? Does somebody have a solution? I'll be grateful.

task1

Try uninstalling PyTorch and installing a different version.
There have been reports of such issues being caused by PyTorch.
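Before reinstalling, it can help to record which PyTorch build is actually in use (the log above shows 2.3.1+cu121), so you know what you are moving away from. A small sketch for the embedded Python; the import guard is only there so it degrades cleanly if torch is absent:

```python
# Print the active PyTorch build and its CUDA version, if torch is importable.
def torch_build_info():
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    # torch.version.cuda is None on CPU-only builds.
    return f"torch {torch.__version__}, CUDA {torch.version.cuda}"

if __name__ == "__main__":
    print(torch_build_info())
```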
