
Add option to add open router api #289

Open
sudip358 opened this issue Feb 13, 2025 · 5 comments

@sudip358

Please consider adding an option to use the OpenRouter API.

@marginal23326
Contributor

marginal23326 commented Feb 13, 2025

In the WebUI's LLM Configuration tab, you can set the Base URL to https://openrouter.ai/api/v1 and enter your OpenRouter API key. Make sure the LLM Provider selector is set to openai (which should be the default).
Then you can enter any model that OpenRouter hosts in the Model Name field.

This should hopefully do the job for now.
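For reference, the same setup can be exercised outside the WebUI against OpenRouter's OpenAI-compatible endpoint. A minimal sketch, assuming the openai Python package (v1+) is installed; the model id deepseek/deepseek-chat is only an example, use whatever identifier OpenRouter lists for your model:

```python
# Minimal sketch: calling OpenRouter through its OpenAI-compatible endpoint.
# Assumes the `openai` Python package (v1+); the model id is illustrative only.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # same Base URL as in the WebUI
    api_key="sk-or-...",                      # your OpenRouter API key
)

response = client.chat.completions.create(
    model="deepseek/deepseek-chat",           # must match OpenRouter's model id
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If this call works but the WebUI doesn't, the problem is likely in the WebUI configuration rather than in OpenRouter itself.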

@Sandv

Sandv commented Feb 13, 2025 via email

@marginal23326
Contributor

marginal23326 commented Feb 13, 2025

But how to choose a model then?

You can type the name of the model in the Model Name input field inside the LLM Configuration tab. It has to match whatever OpenRouter uses.

I haven't tested it, but it should work.
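If you're unsure of the exact spelling, OpenRouter publishes its model list at https://openrouter.ai/api/v1/models. A small sketch, assuming the requests package, that prints the ids you can paste into the Model Name field:

```python
# Sketch: list the model ids OpenRouter accepts, to copy into the Model Name field.
# Assumes the `requests` package; uses OpenRouter's public model listing endpoint.
import requests

models = requests.get("https://openrouter.ai/api/v1/models", timeout=30).json()
for m in models["data"]:
    print(m["id"])  # e.g. "deepseek/deepseek-r1", "meta-llama/llama-3.3-70b-instruct"
```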

@xxxJustSomeBoringLadxxx

I second this request for openrouter.ai compatibility. I've entered the model after pointing to the OpenRouter URL, but only some of it works and most of it doesn't. The DeepSeek R1 models are erroring. V3 sometimes works; sometimes nothing does. The Llama models give errors. Qwen seemed to work most of the time but certainly isn't as smart as some of the others.

Marginal's suggestion works, but only sometimes, it seems, and only with some models. I tried selecting deepseek and using the OpenRouter API, and it still didn't work with R1.

I just started messing with this today. Is there any way to log the input and output to and from the LLM, to see exactly what we're sending and what we're getting back? I'm happy to test some stuff if I can help.

@marginal23326
Contributor

The DeepSeek R1 models are erroring.

Make sure that the "Use Vision" checkbox is turned off when using the DeepSeek model, or any other model that does not support image data. Refer to this answer:

#243 (comment)
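As a rough illustration of why the checkbox matters (a sketch in OpenAI chat-message terms, not necessarily this repo's exact payload): with vision enabled, each request carries image content parts that text-only models reject.

```python
# Sketch of the difference, in OpenAI chat-message format (illustrative only).
# A text-only model such as DeepSeek R1 will reject the multimodal variant.
text_only_message = {
    "role": "user",
    "content": "Click the search button.",
}

with_vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Click the search button."},
        # Screenshot attached as a base64 data URL -- this part is what
        # models without vision support cannot handle.
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}},
    ],
}
```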

Is there any way to log the output and input to and from the LLM to see exactly what we're sending and what we're getting back?

I think the logs you are looking for are already stored under \tmp\agent_history in JSON format.
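If it helps, here is a quick sketch for inspecting those files, assuming they are plain JSON under ./tmp/agent_history (the exact path and layout may differ on your setup):

```python
# Sketch: pretty-print the stored agent history logs for inspection.
# Assumes JSON files under ./tmp/agent_history; adjust the path to your setup.
import json
from pathlib import Path

for path in sorted(Path("tmp/agent_history").glob("*.json")):
    print(f"--- {path.name} ---")
    with path.open(encoding="utf-8") as f:
        print(json.dumps(json.load(f), indent=2)[:2000])  # first 2000 chars per file
```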

Also, it would be helpful if you could share the errors you encountered in the terminal while trying to use some of the models.
