Eval bug: llama-bench seems to be broken #13169
Comments
llama-bench -m Qwen3-0.6B-Q8_0.gguf
I've had problems as well since the merge of #13096. I found an inconsistency between the values and the fields, which caused llama-bench to crash when printing the output of the tests. I opened a PR; I'm not sure it's the correct fix, but I hope it at least helps bring visibility to the issue.
It worked:
Name and Version
version: 5215 (5f5e39e)
built with MSVC 19.43.34808.0 for x64
I've tested CPU, Vulkan, and SYCL; llama-bench either crashes and burns, or outputs the following 2 lines and then exits:
Operating systems
Windows
GGML backends
CPU
Hardware
Ryzen 7900X + Intel A770
Models
I tried several models which currently work with llama-server.
Problem description & steps to reproduce
.\llama-bench.exe -m model
First Bad Commit
No response
Relevant log output