ComfyUI crash: UserWarning: 1Torch was not compiled with flash attention. #4926
Labels: duplicate (This issue or pull request already exists), Potential Bug (User is reporting a bug. This should be tested.)
Expected Behavior
Hello!
I have two problems. The first doesn't seem to be serious, since the program keeps running anyway; the second always crashes the program when I use the file "flux1-dev-fp8.safetensors". I have attached the log file and hope you can help me.
Actual Behavior
ComfyUI-Manager: installing dependencies done.
[2024-09-14 19:20] ** ComfyUI startup time: 2024-09-14 19:20:34.652076
[2024-09-14 19:20] ** Platform: Windows
[2024-09-14 19:20] ** Python version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
[2024-09-14 19:20] ** Python executable: D:\ConfyUI\ComfyUI_windows_portable\python_embeded\python.exe
[2024-09-14 19:20] ** ComfyUI Path: D:\ConfyUI\ComfyUI_windows_portable\ComfyUI
[2024-09-14 19:20] ** Log path: D:\ConfyUI\ComfyUI_windows_portable\comfyui.log
[2024-09-14 19:20]
Prestartup times for custom nodes:
[2024-09-14 19:20] 0.0 seconds: D:\ConfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
[2024-09-14 19:20] 1.1 seconds: D:\ConfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
[2024-09-14 19:20]
Total VRAM 12288 MB, total RAM 32561 MB
[2024-09-14 19:20] pytorch version: 2.4.1+cu124
[2024-09-14 19:20] Set vram state to: NORMAL_VRAM
[2024-09-14 19:20] Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
[2024-09-14 19:20] Using pytorch cross attention
[2024-09-14 19:20] [Prompt Server] web root: D:\ConfyUI\ComfyUI_windows_portable\ComfyUI\web
[2024-09-14 19:20] D:\ConfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning:
`torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
[2024-09-14 19:20] ### Loading: ComfyUI-Manager (V2.50.3)
[2024-09-14 19:20] ### ComfyUI Revision: 2693 [ca08597] | Released on '2024-09-14'
[2024-09-14 19:20] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[2024-09-14 19:20] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[2024-09-14 19:20] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[2024-09-14 19:20] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[2024-09-14 19:20] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[2024-09-14 19:20]
[2024-09-14 19:20] [rgthree] Loaded 42 exciting nodes.
[2024-09-14 19:20] [rgthree] NOTE: Will NOT use rgthree's optimized recursive execution as ComfyUI has changed.
[2024-09-14 19:20]
[2024-09-14 19:20]
Import times for custom nodes:
[2024-09-14 19:20] 0.0 seconds: D:\ConfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
[2024-09-14 19:20] 0.0 seconds: D:\ConfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-SDXL-EmptyLatentImage
[2024-09-14 19:20] 0.2 seconds: D:\ConfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
[2024-09-14 19:20] 0.3 seconds: D:\ConfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
[2024-09-14 19:20]
[2024-09-14 19:20] Starting server
[2024-09-14 19:20] To see the GUI go to: http://127.0.0.1:8188
[2024-09-14 19:20] FETCH DATA from: D:\ConfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
[2024-09-14 19:21] got prompt
[2024-09-14 19:21] Using pytorch attention in VAE
[2024-09-14 19:21] Using pytorch attention in VAE
[2024-09-14 19:21] D:\ConfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning:
`clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be deprecated in transformers v4.45, and will then be set to `False` by default. For more details check this issue: huggingface/transformers#31884
  warnings.warn(
[2024-09-14 19:21] clip missing: ['text_projection.weight']
[2024-09-14 19:21] Requested to load FluxClipModel_
[2024-09-14 19:21] Loading 1 new model
[2024-09-14 19:21] loaded completely 0.0 4777.53759765625 True
[2024-09-14 19:21] D:\ConfyUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
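For context, this UserWarning by itself is informational rather than fatal: the Windows PyTorch wheels are built without the flash-attention kernel, and `scaled_dot_product_attention` falls back to another SDPA backend. If the message is unwanted, it can be silenced with the standard-library `warnings` module; the sketch below (not ComfyUI code, and deliberately torch-free) demonstrates the filter against a warning raised with the same text:

```python
import warnings

def suppress_flash_warning():
    # Ignore the informational "not compiled with flash attention" message.
    # PyTorch still runs; it just selects a different attention backend.
    warnings.filterwarnings(
        "ignore",
        message=".*not compiled with flash attention.*",
        category=UserWarning,
    )

# Demonstrate with the stdlib alone: a matching warning raised after
# installing the filter is not emitted (and so is not recorded).
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    suppress_flash_warning()
    warnings.warn("1Torch was not compiled with flash attention.", UserWarning)

print(len(caught))  # 0 -> the warning was filtered
```

This only hides the message; to actually use flash attention on Windows one would need a PyTorch build compiled with it, which is a separate matter from this crash.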
Steps to Reproduce
When I use the file "flux1-dev-fp8.safetensors", it always crashes. With the fast version of it, it works without problems.
Debug Logs
Other
No response