Add option to use the OpenRouter API #289
Comments
In the WebUI, you can put the base url/endpoint as https://openrouter.ai/api/v1 with your OpenRouter API key. Make sure to choose the provider as openai from the selector. This should hopefully do the job for now.
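For anyone who wants to verify the endpoint outside the WebUI first, here is a minimal sketch using the openai Python client; the model slug and key are placeholders, and it relies on OpenRouter exposing an OpenAI-compatible chat API at that base URL:

```python
# Minimal sketch: talk to OpenRouter directly with the openai client,
# using the same base URL and key you would enter in the WebUI.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # endpoint suggested above
    api_key="YOUR_OPENROUTER_API_KEY",        # placeholder: use your own key
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",  # placeholder slug; any OpenRouter model id works
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```

If this call succeeds but the WebUI fails with the same settings, the problem is in the UI configuration rather than the key or endpoint.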
But how to choose a model then?
You can type the name of the model in the model name field. I haven't tested it, but it should work.
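If you are unsure which names are valid, OpenRouter publishes its catalogue at a public /models endpoint; a small sketch that prints the slugs you could type into that field:

```python
# Sketch: list OpenRouter model slugs (public endpoint, no key required).
import requests

resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])  # e.g. "deepseek/deepseek-r1"
```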
I second this request for openrouter.ai compatibility. I've put the model in after pointing to the OpenRouter URL, but only some of it works; most of it doesn't. DeepSeek R1 models error out. V3 sometimes works; sometimes nothing does. Llama models give errors. Qwen usually seemed to work, but it certainly isn't as smart as some of the others. Marginal's suggestion works, but only sometimes, it seems, and only with some models. I tried selecting deepseek and using the OpenRouter API, and it still didn't work with R1.

I just started messing with this today. Is there any way to log the input and output to and from the LLM, to see exactly what we're sending and what we're getting back? I'm happy to test some stuff if I can help.
Make sure that the "Use Vision" checkbox is turned off when using the DeepSeek model, and other models that do not support image data. Refer to this answer:
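To illustrate why that checkbox matters, here is a sketch of the two message shapes in the OpenAI-style chat format (the exact payload the WebUI builds may differ): with vision enabled, each message carries an image part that text-only models reject.

```python
# Text-only message: accepted by models like DeepSeek R1.
text_only_message = {"role": "user", "content": "Describe the current page."}

# Vision-style message: the extra image part causes errors on models
# that do not accept image input, matching the failures reported above.
vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the current page."},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}},
    ],
}
```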
I think the logs you are looking for are already stored under …. Also, it would be helpful if you could share the errors you encountered in the terminal while trying to use some of the models.
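In the meantime, a generic way to watch raw LLM traffic is to raise Python's log level for the HTTP stack. This sketch assumes the provider goes through the openai client (which uses httpx under the hood); adjust the logger names if your setup differs:

```python
# Sketch: surface each request/response line the HTTP client emits.
import logging

logging.basicConfig(level=logging.INFO)  # root handler prints to stderr
for name in ("openai", "httpx"):
    logging.getLogger(name).setLevel(logging.DEBUG)  # verbose LLM traffic
```

The openai library also reads an OPENAI_LOG environment variable (debug or info) for the same purpose.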
Please consider adding an option to use the OpenRouter API.