-
It should be possible by adding it as a custom endpoint in the librechat.yaml; see:
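For reference, here is a minimal sketch of such a custom endpoint in librechat.yaml, assuming LibreChat runs in Docker and Ollama runs on the host; the baseURL, model names, and other values are illustrative, not taken from this thread:

```yaml
# librechat.yaml — sketch of a custom endpoint for a local Ollama server
version: 1.1.4         # use the version matching your LibreChat release
endpoints:
  custom:
    - name: "Ollama"
      apiKey: "ollama" # Ollama ignores the key, but the field must be set
      # host.docker.internal lets the LibreChat container reach Ollama on the
      # host; use the host's address directly if LibreChat is not in Docker
      baseURL: "http://host.docker.internal:11434/v1/"
      models:
        default: ["llama3.2", "mistral"]
        fetch: true    # also list whatever models Ollama reports
      titleConvo: true
      modelDisplayLabel: "Ollama"
```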
-
Hello, I am having trouble accessing a locally installed Ollama (serving Llama 3.2 and Mistral 7B) from LibreChat. I am on macOS; Ollama runs natively, separately from LibreChat, which runs in Docker. I exposed the Ollama API server using:
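For reference, a typical way to make Ollama listen beyond localhost on macOS (per the Ollama FAQ) looks like the following; this is a general sketch, not necessarily the exact command used above:

```bash
# Make the Ollama server listen on all interfaces instead of 127.0.0.1 only.
# Option 1: for a single foreground run
OLLAMA_HOST=0.0.0.0 ollama serve

# Option 2: persistently for the macOS Ollama app (then restart the app)
launchctl setenv OLLAMA_HOST "0.0.0.0"
```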
I added Ollama as a custom endpoint in the librechat.yaml:
After restarting Ollama and Docker, I can reach Ollama from my phone at my Mac's IP on port 11434 ("Ollama is running"), so it is exposed on the network. Both models also work from Open WebUI and Obsidian. In LibreChat, however, the endpoint appears in the menu, but when I send any prompt I receive an error message:
Do you have any idea why it does not work?
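A quick way to narrow this kind of problem down in a Docker setup is to check whether the LibreChat container can reach Ollama at the configured baseURL; the container name below is an assumption, so check `docker ps` for the actual one:

```bash
# Confirm the LibreChat container can reach the host's Ollama.
docker exec -it LibreChat curl -s http://host.docker.internal:11434/api/tags
# A JSON list of models means Ollama is reachable from inside the container;
# a connection error usually means the baseURL points at an address (such as
# localhost) that only resolves on the host, not inside the container.
# If curl is not present in the image, substitute wget -qO- or any HTTP client.
```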
-
Hey all,
I'm exploring the possibility of connecting LibreChat running on a remote server to Ollama on a local PC. Specifically, I want to run LibreChat on a data-center server without GPU capabilities and have it send requests to Ollama on my local machine, which has an RTX 4090 Ti (for example). I know this setup is possible with LobeChat (I've tested it and it works), but I'd prefer to implement it with LibreChat. Is this currently supported in LibreChat, or would it require additional development?
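As far as the plumbing goes, LibreChat only needs a baseURL it can reach, so one generic way to wire this up (independent of LibreChat itself) is a reverse SSH tunnel from the GPU machine to the server; the hostname and ports below are placeholders:

```bash
# Run on the local GPU machine: expose the local Ollama (port 11434) on the
# remote server's loopback as port 11434.
ssh -N -R 11434:localhost:11434 user@remote-server

# LibreChat's custom endpoint on the server can then use
#   baseURL: "http://localhost:11434/v1/"
# (if LibreChat runs in Docker on the server, the container needs a route to
# the host's forwarded port, e.g. host.docker.internal with host-gateway).
```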