
IMPORT FAILED in ComfyUI #76

Closed
aifuzz59 opened this issue Apr 18, 2024 · 3 comments

@aifuzz59

The import failed for these nodes:

File "", line 940, in exec_module
File "", line 241, in call_with_frames_removed
File "D:\ComfyUI_Training\ComfyUI_windows_portable_nvidia_cu121_or_cpu (4)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\__init__.py", line 46, in <module>
    check_requirements_installed(llama_cpp_agent_path)
File "D:\ComfyUI_Training\ComfyUI_windows_portable_nvidia_cu121_or_cpu (4)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\__init__.py", line 35, in check_requirements_installed
    subprocess.check_call([sys.executable, '-s', '-m', 'pip', 'install', *missing_packages])
File "subprocess.py", line 413, in check_call
subprocess.CalledProcessError: Command '['D:\ComfyUI_Training\ComfyUI_windows_portable_nvidia_cu121_or_cpu (4)\ComfyUI_windows_portable\python_embeded\python.exe', '-s', '-m', 'pip', 'install', 'llama-cpp-agent', 'mkdocs', 'mkdocs-material', 'mkdocstrings[python]', 'docstring-parser']' returned non-zero exit status 2.

Cannot import D:\ComfyUI_Training\ComfyUI_windows_portable_nvidia_cu121_or_cpu (4)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes module for custom nodes: Command '['D:\ComfyUI_Training\ComfyUI_windows_portable_nvidia_cu121_or_cpu (4)\ComfyUI_windows_portable\python_embeded\python.exe', '-s', '-m', 'pip', 'install', 'llama-cpp-agent', 'mkdocs', 'mkdocs-material', 'mkdocstrings[python]', 'docstring-parser']' returned non-zero exit status 2.
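The traceback only reports pip's exit status, not why it failed. One way to narrow it down is to check each of the five packages by hand; a sketch assuming a POSIX shell, where PYEXE is a stand-in for the portable python.exe path shown above:

```shell
# Stand-in for the embedded interpreter from the traceback; on Windows,
# point this at ...\python_embeded\python.exe instead.
PYEXE="${PYEXE:-python3}"

# Report which of the packages from the failing install are present.
for pkg in llama-cpp-agent mkdocs mkdocs-material 'mkdocstrings[python]' docstring-parser; do
  base="${pkg%%\[*}"   # pip show takes the bare name, without extras
  if "$PYEXE" -m pip show "$base" >/dev/null 2>&1; then
    echo "installed: $pkg"
  else
    echo "missing:   $pkg   (try: $PYEXE -s -m pip install '$pkg')"
  fi
done
```

Installing the missing package on its own usually surfaces the real error message that the batched install hid behind "exit status 2".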

I have updated ComfyUI and it still won't work. Any ideas?

@gokayfem
Owner

Change the llama-cpp-agent version inside cpp_agent_req.txt to llama-cpp-agent==0.0.17.
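A minimal sketch of that edit, assuming the default custom_nodes layout (the path and the stand-in requirements file created here are illustrative, not your real install):

```shell
# Illustrative path; point REQ at your actual ComfyUI_VLM_nodes checkout.
REQ="ComfyUI/custom_nodes/ComfyUI_VLM_nodes/cpp_agent_req.txt"

# Demo scaffold so the edit can be shown end to end: a stand-in
# requirements file with an unpinned llama-cpp-agent entry.
mkdir -p "$(dirname "$REQ")"
printf 'llama-cpp-agent\nmkdocs\n' > "$REQ"

# Pin llama-cpp-agent to 0.0.17, leaving other requirements untouched.
# (GNU sed; on macOS/BSD use: sed -i '' ...)
sed -i 's/^llama-cpp-agent.*/llama-cpp-agent==0.0.17/' "$REQ"
cat "$REQ"
```

After the edit, restarting ComfyUI re-runs the node's requirement check against the pinned version.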

@ricperry

ricperry commented Jun 22, 2024

change the llama-cpp-agent version inside cpp_agent_req.txt to llama-cpp-agent==0.0.17

This doesn't work on Linux + ROCm

Is there a way you can hook into the Ollama API?

@cornpo

cornpo commented Oct 14, 2024

You've probably figured it out by now, but you have to do something like 'CC=hipcc CXX=hipcc pip install ...', and you probably also need CUDA installed.
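A hedged sketch of that idea: hipcc is the HIP compiler driver shipped with ROCm, and pointing CC/CXX at it makes the native build target AMD GPUs. The package name llama-cpp-python is an assumption (it is the usual native dependency in this stack), and the guard keeps the command a harmless no-op on machines without ROCm:

```shell
# Point the build at ROCm's HIP compiler, as suggested above.
export CC=hipcc CXX=hipcc

# Rebuild the native wheel from source only if hipcc is actually available;
# otherwise say what is missing instead of failing mid-install.
if command -v hipcc >/dev/null 2>&1; then
  pip install --force-reinstall --no-cache-dir llama-cpp-python
else
  echo "hipcc not found; install ROCm (or adjust CC/CXX) first"
fi
```

--no-cache-dir forces a fresh source build rather than reusing a wheel that was compiled without HIP support.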

@gokayfem gokayfem closed this as completed Nov 1, 2024