[Frontend] Allow the user to switch model and continue session #4605
Comments
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.
I played with the hosted version a bit tonight, and it looks like this is working, thanks to conversations. I used an experimental version of Gemini (with 32k context), it hit the limit, then I changed it to a model with a higher limit. When I changed the model, it dumped me on the home page, but since the sid is in the URL, the link to the last conversation took me back, and the agent continued just fine with the new model. I think we can just remove the warning and let the user go back to the same page, not the home page. WDYT?
I believe there was a brief discussion about this suggested behaviour with @rbren. I don't see it being a common case that users will switch models mid-session, but we can support avoiding redirection when it does happen, so it makes sense to me.
I think this was solved with the recent Settings window revamp, please feel free to reopen if not. |
What problem or use case are you trying to solve?
Allow the user to switch the model without necessarily starting a new session.
Do you have thoughts on the technical implementation?
I'm not sure why we wouldn't allow switching models. I don't think there's any reason in the backend why it wouldn't just work. If I recall correctly, this used to work with the UI too. I'll note that the user will lose their history if we force a new session. Why not let them try a better/different model for their current project?
To be sure, there are settings which have to trigger a runtime reload, which we might call a new session, but model/base URL/API key are not among them. In fact, I think all LLMConfig settings would work just fine, backend-wise, within the same session.
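The distinction above could be sketched roughly as follows. Note this is an illustrative sketch, not the actual OpenHands code: the setting names and the `needs_new_session` helper are hypothetical, and the real set of reload-triggering settings would have to match what the backend actually does.

```python
# Hypothetical classification of settings changes.
# Settings that affect the sandbox/runtime would force a reload (a "new
# session"); LLM-level settings could be swapped mid-session.
RUNTIME_SETTINGS = {"sandbox_image", "workspace_dir"}   # illustrative names
LLM_SETTINGS = {"model", "base_url", "api_key"}         # per the comment above


def needs_new_session(changed_keys: set[str]) -> bool:
    """Return True only if a changed setting affects the runtime itself."""
    return bool(changed_keys & RUNTIME_SETTINGS)


# Changing only the model would not require a new session:
print(needs_new_session({"model"}))          # False
# Changing the sandbox image would:
print(needs_new_session({"sandbox_image"}))  # True
```

Under this scheme, the frontend would only warn (or redirect) when `needs_new_session` is true, and otherwise keep the user on the current conversation page.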