[Bug]: Custom Endpoint LocalAI "Failed to fetch models from OpenAI API Request failed with status code 404" #1627
Replies: 4 comments 2 replies
-
The model fetch failure may be unrelated to your issue: it is literally calling OpenAI, and that's why it fails. Check the logs. Do you get no output at all? An immediate empty response? Or does it just hang with the blinking cursor? I have not tested LocalAI with LibreChat in a long time, but I remember it working. I will try to get it set up again to test for you. What do the LocalAI logs look like, does it ever reach the server? If LocalAI is running on Docker, try …
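A quick way to answer the "does it ever reach the server" question is to hit LocalAI's models endpoint directly; a minimal sketch, assuming LocalAI is listening on localhost:8080 (adjust the host and port to your setup):

```sh
# Hypothetical reachability check; replace localhost:8080 with your LocalAI address.
curl http://localhost:8080/v1/models
```

If this returns a JSON model list but LibreChat still hangs, the problem is more likely LibreChat-to-LocalAI networking (especially between Docker containers) than LocalAI itself.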
-
"Do you get no output at all? An immediate empty response? Or does it just hang with the blinking cursor?" After I submit chat, I just get constant blinking and no response "What do the LocalAI logs look like, does it ever reach the server?" The logs for LocalAI show no errors, or anything. So it doesn't look as if anything is happening on the LocalAi end of things Here is the debug from LibreChat:
I think the issue must be something wrong with the content of the custom config file (librechat.yaml) that links LibreChat to the custom endpoint, or something in the .env file. Considering there are no issues with Mistral and other endpoints, it should all work with LocalAI; it's just a matter of setting a few variables correctly. I did see that this problem appears to have been resolved in an older issue: #1027
-
Do you want to fetch the models? If not, try changing the fetch key to false.
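For reference, a minimal librechat.yaml sketch with fetching disabled; the baseURL and model name are placeholders for your own setup, not values confirmed in this thread:

```yaml
version: 1.0.5
endpoints:
  custom:
    - name: "LocalAI"
      apiKey: "sk-anything" # LocalAI generally ignores the key, but the field is required
      baseURL: "http://localhost:8080/v1" # placeholder: point this at your LocalAI instance
      models:
        default: ["gpt-3.5-turbo"] # placeholder model name
        fetch: false # serve the default list instead of requesting /v1/models
```

With fetch set to false, LibreChat uses the default list as-is and never issues the model-list request that was returning 404.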
-
For the record, I didn't notice before, but you shouldn't use the env variables in combination with the custom endpoints, as they can introduce conflicts.
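Concretely, that means leaving the OpenAI-specific variables unset while the endpoint is defined in librechat.yaml; a sketch using variable names from LibreChat's .env.example:

```env
# Keep these unset (or commented out) when using a custom endpoint,
# since they can conflict with the librechat.yaml configuration:
# OPENAI_API_KEY=
# OPENAI_REVERSE_PROXY=
# OPENAI_MODELS=
```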
-
What happened?
I'm having a hard time figuring out how to use LocalAI as a custom endpoint with LibreChat.
I keep getting an error stating that the models could not be fetched, but the model does show up in LibreChat and is selectable. Whenever I type anything into the chat, the response just ends, and then this error is displayed in the log.
example of .env:
example of librechat.yaml:
models at https://localai.example.com/v1/models:
Steps to Reproduce
What browsers are you seeing the problem on?
No response
Relevant log output
Screenshots
Code of Conduct