Just curious why the bearer token is used to determine the model location. Why not a parameter in the JSON, like the "model" field OpenAI uses to select a model? That seems more intuitive to me, but perhaps you have a reason for the bearer token?
It would allow a more "seamless" way to change the model between requests, for example. (Although currently the code doesn't check whether the model path is the same, i.e. whether to restart the model or not; it only checks the last messages.)
I think this IS the point of the "model" parameter in OpenAI's API; they just know the location and select the model by name, whereas here the value is the location itself. You could, however, assume a location (/llama.cpp/models) and use the filename as the model name. That seems closer to 1:1 behaviour to me, no?
Happy to submit a PR that alters the API key check to fall back to checking the JSON in the request.