Weird bug where malformed API request causes model to analyze error message #151
The output of my netcat (nc) command:
Similarly, if I run this curl command with no headers and no JSON body, I get random German text:

$ curl -X POST http://10.0.0.1:9990/v1/chat/completions

That yielded:
In case you're curious, Google Translate says:
My initial concern was that some malformed request caused an error in dllama-api that was "seen" by the LLM, but now I'm thinking it might just be the model's random response to an empty query. So maybe this isn't a valid bug after all? Either way, I'm leaving it so it's known. Maybe we should check for an empty or malformed prompt before running inference on it?
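The check suggested above could look something like this. This is only a sketch, not dllama-api's actual code; the function name and the exact error strings are hypothetical, but the field names follow the OpenAI-style /v1/chat/completions schema that the endpoint mimics. The idea is to reject an empty or malformed body before any tokens reach the model:

```python
import json

def validate_chat_request(raw_body: str):
    """Hypothetical pre-inference guard: returns (ok, error_message)."""
    # Reject a completely empty body (e.g. a bare POST with no payload).
    if not raw_body or not raw_body.strip():
        return False, "empty request body"
    # Reject bodies that are not valid JSON.
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return False, "body is not valid JSON"
    # Require a non-empty 'messages' array, as in the chat/completions schema.
    messages = payload.get("messages")
    if not isinstance(messages, list) or not messages:
        return False, "missing or empty 'messages' array"
    # Require each message to carry non-blank string content.
    for m in messages:
        if not isinstance(m, dict) or not isinstance(m.get("content"), str) \
                or not m["content"].strip():
            return False, "message with missing or empty 'content'"
    return True, ""
```

With a guard like this, both reproductions in this report (the bare netcat POST and the header-less curl) would get a 400-style rejection instead of triggering inference on nothing.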
I know this is contrived but I had it happen just now and wanted to report it as a bug.
I ran dllama-api in one window (used llama3_2_3b_instruct_q40).
Then in another window I used netcat (nc) to connect into port 9990.
I manually typed:
"POST /v1/chat/completions HTTP/1.0"
(without the quotes), then hit Enter. It waited for the header lines I would normally supply, and I hit Enter again. I expected an error, but it looks like that error was actually seen by the LLM. (You can see the few GET commands I tried by hand before the POST, then the rest of the output.) This was in the dllama-api window:
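For what it's worth, here is a minimal sketch of how this kind of behavior can arise. This is not dllama-api's actual handler; it's a hypothetical example of a handler that falls through on a parse failure, so that either an empty string or its own error text becomes the "prompt" the model runs on:

```python
import json

def naive_handler(raw_body: str) -> str:
    """Hypothetical handler with the suspected bug: no input validation."""
    try:
        # Happy path: pull the last user message out of the JSON body.
        prompt = json.loads(raw_body)["messages"][-1]["content"]
    except Exception as e:
        # Bug: on any parse failure, the error text itself is what
        # gets handed to inference instead of an HTTP error response.
        prompt = f"error: {e}"
    return prompt  # this string is what the model would be run on
```

A bare POST with no body hits the except branch, so the model ends up "analyzing" a JSON parse error, which would match the symptom described here.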
While it was amusing, it might actually be usable as an attack surface, so I wanted to mention it.