Loading the wrong GGUF model causes a (core dump) crash #84

Closed
liusida opened this issue May 8, 2024 · 0 comments
liusida commented May 8, 2024

I've downloaded two gguf model files and put them in the folder as instructed.

Then I added two nodes in Comfy, as shown in the screenshot, to load the models.

[screenshot: ComfyUI workflow with the two GGUF loader nodes]

However, I didn't pay attention to which model file each node should select; I assumed it would pick the right one by default. When I ran the workflow, the whole system crashed with a core dump.

It took me quite a while to discover that the "Llava Clip Loader" was trying to load the main Llava model (in my case, llava-v1.6-mistral-7b.Q3_K_XS.gguf). Once I selected the right file (in my case, mmproj-model-f16.gguf), it worked.

I think it would be great to select the correct file by default, or at least to run some error check before handing the file to llama-cpp instead of relying on it not to crash. C++ is dangerous.
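
As a rough illustration of such a check, here is a minimal sketch using the `gguf` Python package (the reader that ships alongside llama.cpp) to inspect a file's metadata before passing it to the clip loader. The assumption that mmproj/projector files report `general.architecture == "clip"`, and the exact way string fields are decoded, are based on typical llama.cpp conversions and may need adjusting; the guard at the bottom is a hypothetical example, not the node's actual code.

```python
from gguf import GGUFReader


def read_architecture(path: str) -> str:
    """Return the general.architecture value from a GGUF file, or '' if absent."""
    reader = GGUFReader(path)
    field = reader.fields.get("general.architecture")
    if field is None:
        return ""
    # For string fields, field.data points at the part holding the raw bytes.
    return bytes(field.parts[field.data[0]]).decode("utf-8")


def looks_like_mmproj(path: str) -> bool:
    """Heuristic: mmproj / CLIP projector GGUFs typically use the 'clip' architecture."""
    return read_architecture(path) == "clip"


# Hypothetical guard inside a clip-loader node, before calling into llama-cpp:
# if not looks_like_mmproj(selected_file):
#     raise ValueError(f"{selected_file} does not look like an mmproj/CLIP model; "
#                      "select the mmproj-*.gguf file instead.")
```

Raising a Python exception this way would surface a readable error in the Comfy UI rather than letting the native loader take down the whole process.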

gokayfem closed this as completed on Nov 1, 2024