[My workflow includes FaceReactor, and the total on a good day is 150 seconds per image generation. Now it is just 50 seconds on average, simply from installing Flash Attention WITHOUT making any special changes to my setup???? I DO NOT GET IT, I DON'T UNDERSTAND!?!?!?!?!] I had a hard time using ComfyUI on Windows 11 previously... so this is what I changed... #4923
Comments
Okay, so the first issue I encounter is the following:
I have no idea what I need to do. Can I simply reinstall based on the suggested steps? I am unsure.
From the rest of the startup process... I can't actually tell if there's any difference. I just wish I could figure out how to get rid of all the warnings; they are causing a lot of anxiety.
Okay, I am in the process of installing Flash Attention on my Windows 11 PC.
As you can see, currently without Flash Attention and using xformers, my image generations take about 150 seconds or so.
The other window is still installing flash-attn as I type this. Let's see if it makes any difference once it is installed... I am assuming that once it's installed I don't have to do anything and can just run Comfy as usual. The installation completed.
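For reference, here is a quick way to confirm the build actually succeeded and is visible to the environment ComfyUI runs in. This is just a minimal sanity-check sketch; it assumes flash-attn was installed into the same environment (e.g. with the usual pip install flash-attn --no-build-isolation command).

```python
# Minimal sanity check after the install finishes (assumes flash-attn was
# installed into this same environment, e.g. via
# `pip install flash-attn --no-build-isolation`).
import flash_attn
from flash_attn import flash_attn_func  # the fused attention kernel entry point

print("flash-attn version:", flash_attn.__version__)
print("flash_attn_func available:", callable(flash_attn_func))
```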
https://huggingface.co/microsoft/Phi-3-mini-4k-instruct — Microsoft's Hugging Face page mentions it briefly: flash_attn==2.5.8. It seems this is the only mention of flash_attn in any of the documentation. Below I have updated to show my current flash-attn, built from source after installing via pip.
This is the only other person who indicated how flash_attn might be getting used.
Two things I gather:
Let's take a look:
https://huggingface.co/docs/transformers/en/perf_infer_gpu_one — I feel dumb. Hugging Face has documentation describing support for flash_attn.
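For what it's worth, this is roughly how those Transformers docs say FlashAttention-2 gets enabled for a supported model. A minimal sketch using the Phi-3 model id mentioned above, assuming flash-attn is installed and the model is loaded in half precision (older transformers versions may also need trust_remote_code=True for Phi-3):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# attn_implementation="flash_attention_2" asks Transformers to route attention
# through the flash-attn package; loading errors out if flash-attn is missing.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FlashAttention-2 requires fp16 or bf16
    attn_implementation="flash_attention_2",
).to("cuda")

print(model.config._attn_implementation)  # should report "flash_attention_2"
```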
I guess the below kind of answers my question.
Additionally...
Okay, so someone has made this job easier for me. This part answers the question of how flash_attn gets loaded into ComfyUI when I did not make any changes to my workflow: it has apparently been integrated into PyTorch, so installing an updated PyTorch is the key. https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/ -->> Attention Is All You Need <<--
https://github.com/pytorch/pytorch/tree/main/aten/src/ATen/native/transformers/cuda/flash_attn — this is the part that calls flash_attn: pytorch/pytorch@d41558f#diff-d1bfb425f2c653ff16c5f553eec51cae8be05c881259ec72f84fbd7d929f92b0R70-R80
OK, OK... I'm just surprised that this is not common knowledge, judging from how little information there is about enabling flash attention when using ComfyUI. Or am I the one who is late to this game?
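Since the speedup here apparently comes from PyTorch's own scaled_dot_product_attention dispatching to a flash kernel (which, as far as I can tell, is what ComfyUI's "Using pytorch cross attention" log line refers to), here is a minimal sketch, assuming PyTorch 2.3+ with CUDA, to check whether the flash backend is actually usable on a given GPU:

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

print("flash SDP enabled:", torch.backends.cuda.flash_sdp_enabled())

# Dummy fp16 tensors shaped (batch, heads, seq_len, head_dim)
q = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Restrict SDPA to the flash backend only; if that kernel cannot be used on
# this GPU/dtype/shape, this raises instead of silently falling back.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v)

print("flash attention kernel ran, output shape:", tuple(out.shape))
```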
Dao-AILab/flash-attention#1308 — let's see if updating Windows and updating the drivers causes any issues... first, sudo apt update && sudo apt upgrade broke a lot of things...
Let's delete everything and start all over again to confirm that yes, there is a speed difference before installing flash_attn. Also, I needed conda to help install an obscure piece of software, so let's do this. Since there is no space for me to set up a conda environment, I think the next best step is to simply delete everything and start from scratch. Okay, so we are deleting the original virtual environment where I had installed ComfyUI.
comf ddd insightface miniconda3
Sat Nov 2 17:13:21 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.57.01 Driver Version: 565.90 CUDA Version: 12.7 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4080 ... On | 00000000:0B:00.0 On | N/A |
| 0% 42C P5 31W / 320W | 566MiB / 16376MiB | 2% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
comf ddd insightface miniconda3
-bash: cd: ven/comf/: No such file or directory
comf ddd insightface miniconda3
on bash 0ms
╭─ 17:13:23 | 2 Nov, Saturday | in
╰─❯ |
First, uninstall everything and start fresh. winget install wsl
Multiple packages found matching input criteria. Please refine the input.
Name Id Source
-------------------------------------------------
Arch WSL 9MZNMNKSM73X msstore
Windows Subsystem for Linux Microsoft.WSL winget
winget install Microsoft.WSL
Found Windows Subsystem for Linux [Microsoft.WSL] Version 2.1.5.0
This application is licensed to you by its owner.
Microsoft is not responsible for, nor does it grant any licenses to, third-party packages.
This package requires the following dependencies:
- Windows Features
Microsoft-Windows-Subsystem-Linux
VirtualMachinePlatform
Enabling [Microsoft-Windows-Subsystem-Linux]...
Enabling [VirtualMachinePlatform]...
Successfully enabled Windows Features dependencies
Downloading https://github.com/microsoft/WSL/releases/download/2.1.5/wsl.2.1.5.0.x64.msi
██████████ 43.0 MB / 127 MB |
I reset everything and reinstalled from scratch. To save space, I am moving everything I can elsewhere using System Settings in Windows 11. PowerShell 7.5.0-preview.5
Loading personal and system profiles took 774ms.
wsl --list --all
Windows Subsystem for Linux has no installed distributions.
Use 'wsl.exe --list --online' to list available distributions
and 'wsl.exe --install <Distro>' to install.
Distributions can also be installed by visiting the Microsoft Store:
https://aka.ms/wslstore
Error code: Wsl/WSL_E_DEFAULT_DISTRO_NOT_FOUND
wsl --list --online
The following is a list of valid distributions that can be installed.
Install using 'wsl.exe --install <Distro>'.
NAME FRIENDLY NAME
Ubuntu Ubuntu
Debian Debian GNU/Linux
kali-linux Kali Linux Rolling
Ubuntu-18.04 Ubuntu 18.04 LTS
Ubuntu-20.04 Ubuntu 20.04 LTS
Ubuntu-22.04 Ubuntu 22.04 LTS
Ubuntu-24.04 Ubuntu 24.04 LTS
OracleLinux_7_9 Oracle Linux 7.9
OracleLinux_8_7 Oracle Linux 8.7
OracleLinux_9_1 Oracle Linux 9.1
openSUSE-Leap-15.6 openSUSE Leap 15.6
SUSE-Linux-Enterprise-15-SP5 SUSE Linux Enterprise 15 SP5
SUSE-Linux-Enterprise-15-SP6 SUSE Linux Enterprise 15 SP6
openSUSE-Tumbleweed openSUSE Tumbleweed
wsl --install Ubuntu-24.04
Installing: Ubuntu 24.04 LTS
[===== 9.0% ]
You can move your files accordingly, so do that if you want to save space. Make sure you update everything... and set sparse if you need to... PowerShell 7.5.0-preview.5
Loading personal and system profiles took 829ms.
winget update --all
No installed package found matching input criteria.
wsl --update
Checking for updates.
Updating Windows Subsystem for Linux to version: 2.3.24.
.
.
.
.
.
wsl --manage Ubuntu-24.04 --set-sparse true
Conversion in progress, this may take a few minutes.
The operation completed successfully.
pwsh MEM: 14% | 11/79GB 41ms
╭─ ♥ 05:34 |
╰─
And for Windows users, I strongly recommend installing xfe so that you don't have to worry too much about using the command line to get things done... ks@4080SUPER:~$ curl -s https://ohmyposh.dev/install.sh | bash -s
unzip is required to install Oh My Posh. Please install unzip and try again.
ks@4080SUPER:~$ sudo apt install unzip
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Suggested packages:
zip
The following NEW packages will be installed:
unzip
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 174 kB of archives.
After this operation, 384 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 unzip amd64 6.0-28ubuntu4.1 [174 kB]
Fetched 174 kB in 1s (182 kB/s)
Selecting previously unselected package unzip.
(Reading database ... 40787 files and directories currently installed.)
Preparing to unpack .../unzip_6.0-28ubuntu4.1_amd64.deb ...
Unpacking unzip (6.0-28ubuntu4.1) ...
Setting up unzip (6.0-28ubuntu4.1) ...
Processing triggers for man-db (2.12.0-4build2) ...
ks@4080SUPER:~$ curl -s https://ohmyposh.dev/install.sh | bash -s
⚠️ Installation directory /home/ks/.local/bin is not in your $PATH, add it using
export PATH=$PATH:/home/ks/.local/bin
ℹ️ Installing oh-my-posh for linux-amd64 in /home/ks/.local/bin
⬇️ Downloading oh-my-posh from https://github.com/JanDeDobbeleer/oh-my-posh/releases/latest/download/posh-linux-amd64
🎨 Installing oh-my-posh themes in /home/ks/.cache/oh-my-posh/themes
🚀 Installation complete.
You can follow the instructions at https://ohmyposh.dev/docs/installation/prompt
to setup your shell to use oh-my-posh.
If you want to use a built-in theme, you can find them in the /home/ks/.cache/oh-my-posh/themes directory:
oh-my-posh init {shell} --config /home/ks/.cache/oh-my-posh/themes/{theme}.omp.json
ks@4080SUPER:~$ sudo apt install xfe
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
Turns out xfe moves files around pretty much the way a Windows user expects, without many issues or headaches. I like it! Also, use Miniconda; it turns out to be a much better way to manage your environments... but you do you~! https://docs.anaconda.com/miniconda/ ┏[ ks from 4080SUPER][ 0s][ RAM: 0/39GB][ Monday at 5:51:27 AM]
┖[~]
└─Δ mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
--2024-11-04 05:52:49-- https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
Resolving repo.anaconda.com (repo.anaconda.com)... 104.16.32.241, 104.16.191.158, 2606:4700::6810:bf9e, ...
Connecting to repo.anaconda.com (repo.anaconda.com)|104.16.32.241|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 148337011 (141M) [application/octet-stream]
Saving to: ‘/home/ks/miniconda3/miniconda.sh’
/home/ks/miniconda3/miniconda.sh 100%[================================================================>] 141.46M 10.9MB/s in 13s
2024-11-04 05:53:01 (10.9 MB/s) - ‘/home/ks/miniconda3/miniconda.sh’ saved [148337011/148337011]
PREFIX=/home/ks/miniconda3
Unpacking payload ...
Installing base environment...
Preparing transaction: ...working... done
Executing transaction: ...working... done
installation finished.
┏[ ks from 4080SUPER][ 0.035s][ RAM: 1/39GB][ Monday at 5:53:07 AM]
┖[~]
└─Δ
And don't forget that conda should be the preferred way to install as many of the dependencies as possible, for example PyTorch: ┏[ ks from 4080SUPER][ 0.004s][ RAM: 0/39GB][ Monday at 6:37:33 AM]
┖[~]
└─Δ conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch-nightly -c nvidia
Channels:
- pytorch-nightly
- nvidia
- defaults
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done
## Package Plan ##
environment location: /home/ks/ven
added / updated specs:
- pytorch
- pytorch-cuda=12.4
- torchaudio
- torchvision
.
.
.
.
.
.
After installing all the relevant software and dependencies: . .bash_history .bashrc .conda .config .motd_shown .profile .tcshrc .zshrc ven
.. .bash_logout .cache .condarc .local .nv .sudo_as_admin_successful .xonshrc miniconda3
Mon Nov 4 07:24:24 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.51.01 Driver Version: 565.90 CUDA Version: 12.7 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4080 ... On | 00000000:0B:00.0 On | N/A |
| 0% 40C P0 13W / 320W | 685MiB / 16376MiB | 7% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
. .bash_history .bashrc .conda .config .motd_shown .profile .tcshrc .zshrc ven
.. .bash_logout .cache .condarc .local .nv .sudo_as_admin_successful .xonshrc miniconda3
. .requirements.txt.swp comfy fix_torch.py node_helpers.py temp
.. CODEOWNERS comfy_execution folder_paths.py nodes.py tests
.ci CONTRIBUTING.md comfy_extras input notebooks tests-unit
.git LICENSE comfyui_screenshot.png latent_preview.py output user
.gitattributes README.md cuda_malloc.py main.py pytest.ini utils
.github __pycache__ custom_nodes model_filemanager requirements.txt web
.gitignore api_server execution.py models script_examples
.pylintrc app extra_model_paths.yaml.example new_updater.py server.py
Use the following command to list your environments and see which one is activated: conda info --envs
Then activate the environment you want with: conda activate <env name from the list above>
-bash: /home/ks/.cargo/env: No such file or directory
┏[ ks from 4080SUPER][ 0s][ RAM: 16/39GB][ Monday at 7:24:25 AM][ master ≡ ?1]
┖[~/ven/comf]
└─Δ conda activate ~/ven
┏[ ks from 4080SUPER][ 0.215s][ RAM: 16/39GB][ Monday at 7:24:36 AM][ master ≡ ?1]
┖[~/ven/comf]
└─Δ conda info --envs
# conda environments:
#
base /home/ks/miniconda3
* /home/ks/ven
┏[ ks from 4080SUPER][ 0.626s][ RAM: 16/39GB][ Monday at 7:24:45 AM][ master ≡ ?1]
┖[~/ven/comf]
└─Δ pip install torchaudio
Requirement already satisfied: torchaudio in /home/ks/ven/lib/python3.12/site-packages (2.5.0.dev20241103)
Requirement already satisfied: torch in /home/ks/ven/lib/python3.12/site-packages (from torchaudio) (2.5.1)
Requirement already satisfied: filelock in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (3.13.1)
Requirement already satisfied: typing-extensions>=4.8.0 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (4.11.0)
Requirement already satisfied: networkx in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (3.2.1)
Requirement already satisfied: jinja2 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (3.1.4)
Requirement already satisfied: fsspec in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (2024.3.1)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.4.127 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (12.4.127)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.4.127 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (12.4.127)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.4.127 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (12.4.127)
Requirement already satisfied: nvidia-cudnn-cu12==9.1.0.70 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (9.1.0.70)
Requirement already satisfied: nvidia-cublas-cu12==12.4.5.8 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (12.4.5.8)
Requirement already satisfied: nvidia-cufft-cu12==11.2.1.3 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (11.2.1.3)
Requirement already satisfied: nvidia-curand-cu12==10.3.5.147 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (10.3.5.147)
Requirement already satisfied: nvidia-cusolver-cu12==11.6.1.9 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (11.6.1.9)
Requirement already satisfied: nvidia-cusparse-cu12==12.3.1.170 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (12.3.1.170)
Requirement already satisfied: nvidia-nccl-cu12==2.21.5 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (2.21.5)
Requirement already satisfied: nvidia-nvtx-cu12==12.4.127 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (12.4.127)
Requirement already satisfied: nvidia-nvjitlink-cu12==12.4.127 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (12.4.127)
Requirement already satisfied: triton==3.1.0 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (3.1.0)
Requirement already satisfied: setuptools in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (75.1.0)
Requirement already satisfied: sympy==1.13.1 in /home/ks/ven/lib/python3.12/site-packages (from torch->torchaudio) (1.13.1)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in /home/ks/ven/lib/python3.12/site-packages (from sympy==1.13.1->torch->torchaudio) (1.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in /home/ks/ven/lib/python3.12/site-packages (from jinja2->torch->torchaudio) (2.1.3)
┏[ ks from 4080SUPER][ 0.419s][ RAM: 16/39GB][ Monday at 7:24:59 AM][ master ≡ ?1]
┖[~/ven/comf]
└─Δ pip uninstal torchaudio
ERROR: unknown command "uninstal" - maybe you meant "uninstall"
┏[ ks from 4080SUPER][ 0.144s][ RAM: 16/39GB][ Monday at 7:25:06 AM][ master ≡ ?1][ Error, check your command]
┖[~/ven/comf]
└─Δ pip uninstall torchaudio
Found existing installation: torchaudio 2.5.0.dev20241103
Uninstalling torchaudio-2.5.0.dev20241103:
Would remove:
/home/ks/ven/lib/python3.12/site-packages/torchaudio
/home/ks/ven/lib/python3.12/site-packages/torchaudio-2.5.0.dev20241103-py3.12.egg-info
/home/ks/ven/lib/python3.12/site-packages/torio
Proceed (Y/n)? y
Successfully uninstalled torchaudio-2.5.0.dev20241103
┏[ ks from 4080SUPER][ 0.916s][ RAM: 16/39GB][ Monday at 7:25:12 AM][ master ≡ ?1]
┖[~/ven/comf]
└─Δ pip install torchaudio
Collecting torchaudio
Downloading torchaudio-2.5.1-cp312-cp312-manylinux1_x86_64.whl.metadata (6.4 kB)
Requirement already satisfied: torch==2.5.1 in /home/ks/ven/lib/python3.12/site-packages (from torchaudio) (2.5.1)
Requirement already satisfied: filelock in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (3.13.1)
Requirement already satisfied: typing-extensions>=4.8.0 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (4.11.0)
Requirement already satisfied: networkx in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (3.2.1)
Requirement already satisfied: jinja2 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (3.1.4)
Requirement already satisfied: fsspec in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (2024.3.1)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.4.127 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (12.4.127)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.4.127 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (12.4.127)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.4.127 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (12.4.127)
Requirement already satisfied: nvidia-cudnn-cu12==9.1.0.70 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (9.1.0.70)
Requirement already satisfied: nvidia-cublas-cu12==12.4.5.8 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (12.4.5.8)
Requirement already satisfied: nvidia-cufft-cu12==11.2.1.3 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (11.2.1.3)
Requirement already satisfied: nvidia-curand-cu12==10.3.5.147 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (10.3.5.147)
Requirement already satisfied: nvidia-cusolver-cu12==11.6.1.9 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (11.6.1.9)
Requirement already satisfied: nvidia-cusparse-cu12==12.3.1.170 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (12.3.1.170)
Requirement already satisfied: nvidia-nccl-cu12==2.21.5 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (2.21.5)
Requirement already satisfied: nvidia-nvtx-cu12==12.4.127 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (12.4.127)
Requirement already satisfied: nvidia-nvjitlink-cu12==12.4.127 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (12.4.127)
Requirement already satisfied: triton==3.1.0 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (3.1.0)
Requirement already satisfied: setuptools in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (75.1.0)
Requirement already satisfied: sympy==1.13.1 in /home/ks/ven/lib/python3.12/site-packages (from torch==2.5.1->torchaudio) (1.13.1)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in /home/ks/ven/lib/python3.12/site-packages (from sympy==1.13.1->torch==2.5.1->torchaudio) (1.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in /home/ks/ven/lib/python3.12/site-packages (from jinja2->torch==2.5.1->torchaudio) (2.1.3)
Downloading torchaudio-2.5.1-cp312-cp312-manylinux1_x86_64.whl (3.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.4/3.4 MB 9.6 MB/s eta 0:00:00
Installing collected packages: torchaudio
Successfully installed torchaudio-2.5.1
┏[ ks from 4080SUPER][ 1.144s][ RAM: 16/39GB][ Monday at 7:25:17 AM][ master ≡ ?1]
┖[~/ven/comf]
└─Δ pyhon main.py
pyhon: command not found
┏[ ks from 4080SUPER][ 0.122s][ RAM: 16/39GB][ Monday at 7:25:21 AM][ master ≡ ?1][ Error, check your command]
┖[~/ven/comf]
└─Δ python main.py
Total VRAM 16376 MB, total RAM 40071 MB
pytorch version: 2.5.1+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4080 SUPER : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: /home/ks/ven/comf/web
/home/ks/ven/lib/python3.12/site-packages/kornia/feature/lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
Import times for custom nodes:
0.0 seconds: /home/ks/ven/comf/custom_nodes/websocket_image_save.py
Starting server
To see the GUI go to: http://127.0.0.1:8188
So installation works fine. ::edited:: I regret installing conda... so many broken-dependency issues... stick to pure Python and create your environment with venv. The only problem arises when you need to install tools written in other languages, and that's where conda comes in: one of the modern attention methods requires a package that is available in the conda repositories but not on PyPI... I will cross that bridge when I get to it. For now: back to Python, create the env using venv, and stick with pip. . .. .bash_history .bash_logout .bashrc .cache .config .local .motd_shown .profile .pyenv .sudo_as_admin_successful comf
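Whichever way the environment is created (venv, pyenv-virtualenv, or conda), a quick sanity check run from inside the activated environment tells you which interpreter and which PyTorch/CUDA build ComfyUI will actually use. A minimal sketch; the path in the comment is just an example from my setup:

```python
import sys
import torch

print("python:", sys.executable)   # should live under your env, e.g. ~/.pyenv/versions/ven (example path)
print("torch :", torch.__version__, "| built for CUDA", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```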
Running pyenv virtualenvs lists all the Python virtualenvs found under /home/k/.pyenv/versions/* (output on the next lines):
3.12.7/envs/ven (created from /home/k/.pyenv/versions/3.12.7)
ven (created from /home/k/.pyenv/versions/3.12.7)
Next, we run pyenv activate ven to activate the listed environment (ven):
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0
Tue Nov 5 23:55:31 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.51.01 Driver Version: 565.90 CUDA Version: 12.7 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4080 ... On | 00000000:0B:00.0 On | N/A |
| 0% 39C P0 14W / 320W | 718MiB / 16376MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 403 G /Xwayland N/A |
+-----------------------------------------------------------------------------------------+
VIRTUAL_ENV=/home/k/.pyenv/versions/3.12.7/envs/ven
PYENV_VIRTUAL_ENV=/home/k/.pyenv/versions/3.12.7/envs/ven
3.12.7/envs/ven (created from /home/k/.pyenv/versions/3.12.7)
* ven (created from /home/k/.pyenv/versions/3.12.7)
. .. .bash_history .bash_logout .bashrc .cache .config .local .motd_shown .profile .pyenv .sudo_as_admin_successful comf
. .pylintrc comfy extra_model_paths.yaml.example models requirements.txt
.. CODEOWNERS comfy_execution fix_torch.py new_updater.py script_examples
.ci CONTRIBUTING.md comfy_extras folder_paths.py node_helpers.py server.py
.git LICENSE comfyui_screenshot.png input nodes.py tests
.gitattributes README.md cuda_malloc.py latent_preview.py notebooks tests-unit
.github api_server custom_nodes main.py output utils
.gitignore app execution.py model_filemanager pytest.ini web
(ven) k@4080SUPER:~/comf$
The next comment will be about installing ComfyUI and comparing a fresh install with and without flash_attn. Stay tuned~!
All extensions are already up-to-date.
Restarting... [Legacy Mode]
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-11-06 00:12:25.402474
** Platform: Linux
** Python version: 3.12.7 (main, Nov 5 2024, 23:19:51) [GCC 13.2.0]
** Python executable: /home/k/.pyenv/versions/ven/bin/python
** ComfyUI Path: /home/k/comf
** Log path: /home/k/comf/comfyui.log
Prestartup times for custom nodes:
0.0 seconds: /home/k/comf/custom_nodes/rgthree-comfy
0.6 seconds: /home/k/comf/custom_nodes/ComfyUI-Manager
Total VRAM 16376 MB, total RAM 40071 MB
pytorch version: 2.6.0.dev20241105+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4080 SUPER : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: /home/k/comf/web
WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `/home/k/comf/custom_nodes/was-node-suite-comfyui/was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
WAS Node Suite: Finished. Loaded 218 nodes successfully.
"Art is the mirror that reflects the beauty within us." - Unknown
[comfy_mtb] | INFO -> loaded 86 nodes successfuly
[comfy_mtb] | INFO -> Some nodes (2) could not be loaded. This can be ignored, but go to http://127.0.0.1:8188/mtb if you want more information.
Total VRAM 16376 MB, total RAM 40071 MB
pytorch version: 2.6.0.dev20241105+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4080 SUPER : cudaMallocAsync
[rgthree-comfy] Loaded 42 epic nodes. 🎉
### Loading: ComfyUI-Manager (V2.51.9)
### ComfyUI Revision: 2811 [8afb97cd] | Released on '2024-11-05'
[ReActor] - STATUS - Running v0.5.1-b2 in ComfyUI
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
Torch version: 2.6.0.dev20241105+cu124
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
Import times for custom nodes:
0.0 seconds: /home/k/comf/custom_nodes/websocket_image_save.py
0.0 seconds: /home/k/comf/custom_nodes/ComfyUI_JPS-Nodes
0.0 seconds: /home/k/comf/custom_nodes/ComfyUI-Image-Saver
0.0 seconds: /home/k/comf/custom_nodes/rgthree-comfy
0.0 seconds: /home/k/comf/custom_nodes/ComfyUI-KJNodes
0.0 seconds: /home/k/comf/custom_nodes/ComfyUI-Manager
0.0 seconds: /home/k/comf/custom_nodes/comfyui_face_parsing
0.2 seconds: /home/k/comf/custom_nodes/comfyui-reactor-node
1.0 seconds: /home/k/comf/custom_nodes/was-node-suite-comfyui
1.3 seconds: /home/k/comf/custom_nodes/comfy_mtb
Starting server
To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: /home/k/comf/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
Okay, so I will first use a freshly installed WSL2 with Ubuntu and a clean, fresh install of ComfyUI with all dependencies for the above image generations.
I installed Ubuntu but am running it from within Windows 11 itself.
I am not sure whether there are any real benefits to doing this, as I have no idea how to run benchmarks to verify whether there are any improvement gains.
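In lieu of a proper benchmark, a crude way to compare attention backends in isolation is to time scaled_dot_product_attention with the flash backend forced versus the plain math fallback. This is only a minimal sketch (it times the attention op alone, not a full image generation), so treat the numbers as indicative at best:

```python
import time
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

def bench(backend, iters=50):
    # Dummy fp16 tensors shaped (batch, heads, seq_len, head_dim)
    q = torch.randn(4, 16, 2048, 64, device="cuda", dtype=torch.float16)
    k, v = torch.randn_like(q), torch.randn_like(q)
    with sdpa_kernel(backend):
        F.scaled_dot_product_attention(q, k, v)  # warm-up
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            F.scaled_dot_product_attention(q, k, v)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

print("flash backend:", bench(SDPBackend.FLASH_ATTENTION) * 1e3, "ms/iter")
print("math backend :", bench(SDPBackend.MATH) * 1e3, "ms/iter")
```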