Right now the model used is determined by the `llm` string and the `llm_provider_id`. If the set of available `llm` models changes (e.g., because a provider deprecates old models), there is no check and nothing alerts the affected experiments.
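To make the gap concrete, here is a minimal sketch of the relationship as I understand it (class and field names are illustrative, not the actual schema):

```python
# Hypothetical sketch of the current relationship; names are assumptions.
from dataclasses import dataclass, field


@dataclass
class LlmProvider:
    id: int
    models: set[str] = field(default_factory=set)  # e.g. {"llama-3", "llama-3.1"}


@dataclass
class Experiment:
    llm_provider_id: int
    llm: str  # free-form model name; nothing ties it to LlmProvider.models
```

Because `llm` is stored as a plain string, editing the provider's model list breaks nothing at the data level, so a removed model would presumably only surface when the experiment next tries to use it.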
Possible solutions:
When we edit a provider, we could compare the old set of models against the new one and, for any model that has been removed, generate an error/warning prompting the user to update the affected experiments so they no longer depend on the model being removed.
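A rough sketch of what that check could look like, reusing the hypothetical types above (the names and structure are assumptions, not a proposed implementation):

```python
# Hypothetical sketch: diff the provider's old and new model lists and flag
# any experiment that still depends on a removed model.
def find_blocked_experiments(
    provider: LlmProvider,
    new_models: set[str],
    experiments: list[Experiment],
) -> dict[str, list[Experiment]]:
    removed = provider.models - new_models
    blocked: dict[str, list[Experiment]] = {m: [] for m in removed}
    for exp in experiments:
        if exp.llm_provider_id == provider.id and exp.llm in removed:
            blocked[exp.llm].append(exp)
    # Any non-empty entry would be surfaced as the error/warning described
    # above, asking the user to update those experiments first.
    return {model: exps for model, exps in blocked.items() if exps}
```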
That solution is a little annoying for the user unless the same edit-provider UI lets them select a replacement. For example, imagine the case where we're removing `llama-3` and adding `llama-3.1`. The steps would be:
1. Edit the provider to remove `llama-3` and add `llama-3.1`
2. Get an error that projects X, Y, Z all rely on `llama-3`, so it can't be removed yet
3. Change the edit to just add `llama-3.1`
4. Manually update all projects relying on the old model
5. Return to the edit-provider screen to remove `llama-3`
An alternative:
1. Edit the provider to remove `llama-3` and add `llama-3.1`
2. Get an error that projects X, Y, Z all rely on `llama-3`, and be asked to choose a replacement model that will be applied to all affected experiments (see the sketch after this list)
3. Confirm the replacement and be done
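A rough sketch of that replacement flow, reusing the hypothetical types from the first sketch (nothing here is a concrete implementation):

```python
# Hypothetical sketch: re-point every affected experiment at its chosen
# replacement before committing the provider edit. Names are assumptions.
def apply_model_replacements(
    provider: LlmProvider,
    new_models: set[str],
    replacements: dict[str, str],  # e.g. {"llama-3": "llama-3.1"}
    experiments: list[Experiment],
) -> None:
    removed = provider.models - new_models
    for exp in experiments:
        if exp.llm_provider_id == provider.id and exp.llm in removed:
            if exp.llm not in replacements:
                raise ValueError(f"No replacement chosen for '{exp.llm}'")
            exp.llm = replacements[exp.llm]
    # Only once every dependent experiment has been migrated do we persist
    # the new model list on the provider.
    provider.models = new_models
```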