fix: check for latest OpenAI model names #6027
Conversation
@@ -270,7 +272,7 @@ def _ensure_token_limit(self, prompt: Union[str, List[Dict[str, str]]]) -> Union

     @classmethod
     def supports(cls, model_name_or_path: str, **kwargs) -> bool:
-        valid_model = model_name_or_path in ["ada", "babbage", "davinci", "curie", "gpt-3.5-turbo-instruct"] or any(
+        valid_model = model_name_or_path in OPEN_AI_MODEL_NAMES or any(
Does this work for the snapshot models (e.g. gpt-3.5-turbo-0613)?
No. :/ I just added the one that was missing for us. Basically we need to come up with a rule to match all OpenAI models, either with a hard-coded rule or by implementing some kind of dynamic validation, e.g. fetching all available models and checking whether the chosen one is among them.
Happy to add extra model names
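The dynamic-validation idea mentioned above could look roughly like this: fetch the account's available model ids from the OpenAI `GET /v1/models` endpoint and check the chosen name against that list. This is only a sketch, not code from this PR; the function names `fetch_openai_model_names` and `supports` are hypothetical here, and a real implementation would want caching and error handling.

```python
import json
import os
import urllib.request
from typing import List, Optional


def fetch_openai_model_names(api_key: Optional[str] = None) -> List[str]:
    """Fetch the available model ids from the OpenAI /v1/models endpoint."""
    key = api_key or os.environ["OPENAI_API_KEY"]
    req = urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {key}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        payload = json.load(resp)
    # The response body is {"data": [{"id": "gpt-4", ...}, ...]}
    return [entry["id"] for entry in payload["data"]]


def supports(model_name: str, available: List[str]) -> bool:
    """Check the chosen model against a (possibly fetched) list of model ids."""
    return model_name in available
```

The membership check is separated from the network call so it can be tested (and cached) independently.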
Hello! A general note.
Check out the two different implementations:
haystack/haystack/nodes/prompt/invocation_layer/open_ai.py, Lines 272 to 276 in 3803d23
haystack/haystack/nodes/prompt/invocation_layer/chatgpt.py, Lines 106 to 111 in 3803d23
My impression is that these last models are supported by the
@@ -20,6 +20,8 @@

 logger = logging.getLogger(__name__)

+OPEN_AI_MODEL_NAMES = ["ada", "babbage", "davinci", "curie", "gpt-3.5-turbo-instruct", "gpt-3.5-turbo", "gpt-4"]
How about making this configurable via env variables?
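One way the env-variable suggestion could be realized: merge the hard-coded defaults with extra names from a comma-separated variable. This is a sketch under assumptions; the variable name `OPENAI_EXTRA_MODEL_NAMES` and the function are made up for illustration.

```python
import os
from typing import List

DEFAULT_OPENAI_MODEL_NAMES = ["ada", "babbage", "davinci", "curie",
                              "gpt-3.5-turbo-instruct", "gpt-3.5-turbo", "gpt-4"]


def get_openai_model_names() -> List[str]:
    """Merge the defaults with names from a comma-separated env variable."""
    extra = os.environ.get("OPENAI_EXTRA_MODEL_NAMES", "")
    # Ignore empty entries and surrounding whitespace, e.g. "a, b ,c"
    extra_names = [name.strip() for name in extra.split(",") if name.strip()]
    return DEFAULT_OPENAI_MODEL_NAMES + extra_names
```

This lets users whitelist new models without waiting for a release that updates the hard-coded list.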
Okay! Makes sense. We will add an extra step to also validate the ChatGPT invocation layer. Thanks for your help!
Related Issues
Proposed Changes:
How did you test it?
Checklist
fix:, feat:, build:, chore:, ci:, docs:, style:, refactor:, perf:, test:.