
[Bug] Fix Regex for identifying OpenAI models #778

Merged May 1, 2024 (5 commits)

Conversation

alexander-brady
Contributor

@alexander-brady alexander-brady commented Apr 24, 2024

Addresses the issue with the model naming convention used by Azure's OpenAI API. It updates the regex pattern to accommodate Azure's distinct format for their 3.5 models, which differs from the OpenAI API's convention. The specific change involves recognizing gpt-35-turbo as opposed to gpt-3.5-turbo.

Additionally, the deployment path check has been changed to handle scenarios where pathlib.parts does not include the deployment path, ensuring more robust path validation during Azure deployments.

Closes #761
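The naming mismatch the PR describes can be sketched with a small pattern check. This is an illustrative sketch, not the exact pattern merged here: the constant and function names are mine, and the real change lives in guidance's model modules.

```python
import re

# Accept both OpenAI's "gpt-3.5-turbo" spelling and Azure's "gpt-35-turbo"
# spelling (Azure drops the dot in its deployment names). The optional
# "\.?" covers both forms; "gpt-4" is matched as well.
CHAT_MODEL_PATTERN = re.compile(r"^(gpt-3\.?5-turbo|gpt-4)")

def looks_like_chat_model(model_name: str) -> bool:
    """Return True if the model name matches a known chat-model prefix."""
    return CHAT_MODEL_PATTERN.match(model_name) is not None
```

With this pattern, `looks_like_chat_model("gpt-35-turbo")` and `looks_like_chat_model("gpt-3.5-turbo")` both succeed, while legacy names such as `davinci-002` do not match.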

…consider cases pathlib.parts doesn't contain deployment
@riedgar-ms riedgar-ms changed the title Fix Regex for Model Naming Convention in Azure (Issue #761) [Bug] Fix Regex for identifying OpenAI models Apr 24, 2024
@riedgar-ms riedgar-ms self-requested a review April 24, 2024 15:04
@Harsha-Nori
Collaborator

Harsha-Nori commented Apr 24, 2024

Hi @alexander-brady and @riedgar-ms, we may want to just move away from the regex-based scheme altogether. To my knowledge, only the gpt-3.5-turbo-instruct line of models uses the legacy Completion interface anymore, so we can perhaps just assume that the model is a ChatCompletion model with a special case exception for the -instruct suffixed models. Thoughts?

EDIT: Upon a bit of investigation, it looks like it is specifically these three model families that leverage the legacy completion endpoints:

https://platform.openai.com/docs/models/model-endpoint-compatibility

gpt-3.5-turbo-instruct, babbage-002, davinci-002

And of these, I don't believe babbage or davinci get any type of meaningful traffic.

@alexander-brady
Contributor Author

I looked into it as well, and that seems to be the case for Azure too. Per the docs, OpenAI recommends against using the base models babbage and davinci, pointing users to GPT-3.5/GPT-4 instead.

I've therefore updated my pull request to check for -instruct instead of regex matching for both Azure and OpenAI.
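The suffix-based approach described above can be sketched as follows. This is a hedged illustration (the function name is mine; the actual change is in `guidance/models/_openai.py` and `guidance/models/_azure_openai.py`): only the `-instruct` models and the two legacy base models still use the Completion endpoint, so everything else is treated as a ChatCompletion model.

```python
# Models known to still use the legacy Completion endpoint, per
# OpenAI's model-endpoint compatibility docs cited in this thread.
LEGACY_COMPLETION_MODELS = {"gpt-3.5-turbo-instruct", "babbage-002", "davinci-002"}

def uses_chat_interface(model_name: str) -> bool:
    """Assume chat unless the name is a known legacy-completion model."""
    if model_name in LEGACY_COMPLETION_MODELS:
        return False
    # Catch Azure-style variants such as "gpt-35-turbo-instruct" too.
    return not model_name.endswith("-instruct")
```

Compared with regex matching, this check keeps working for new chat model names without any pattern updates.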

@codecov-commenter

codecov-commenter commented Apr 24, 2024

Codecov Report

Attention: Patch coverage is 0%, with 6 lines in your changes missing coverage. Please review.

Project coverage is 62.54%. Comparing base (92afe33) to head (837053c).

Files                              Patch %   Missing lines
guidance/models/_azure_openai.py   0.00%     5 ⚠️
guidance/models/_openai.py         0.00%     1 ⚠️


Additional details and impacted files
@@            Coverage Diff             @@
##             main     #778      +/-   ##
==========================================
+ Coverage   55.38%   62.54%   +7.16%     
==========================================
  Files          55       55              
  Lines        4079     4074       -5     
==========================================
+ Hits         2259     2548     +289     
+ Misses       1820     1526     -294     


@riedgar-ms
Copy link
Collaborator

I shall take this opportunity to bring up a point I've mentioned before: do we really want to try autodetecting model types like this at all?

We are creating something inherently fragile, just to extract a single bit of information (chat vs completion). In the code, this has also led to a rather interesting coding pattern involving mutually recursive constructors, and objects which change their types dynamically.

As things stand, we are making life very marginally easier for newcomers. However, when it breaks (and given that we do not control the names of the OpenAI models, breakage is inevitable) I fear that those same newcomers will be in a worse position.

@alexander-brady
Copy link
Contributor Author

As a newcomer, not detecting my model as a chat model made me reconsider even using this framework, and I almost gave up on it altogether before deciding to take a closer look at the source code.

However, since the majority of OpenAI models use the chat API, I think it's reasonable to make the model automatically be set to chat mode, with a parameter to change this if necessary. Especially if there's an error message that pops up, telling users how to fix it simply by changing the parameter.
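The fallback suggested here can be sketched as a default-to-chat resolution with an explicit override. The parameter and function names below are illustrative, not the actual guidance API:

```python
from typing import Optional

def resolve_chat_mode(model_name: str, chat_mode: Optional[bool] = None) -> bool:
    """Decide chat vs. completion mode, preferring an explicit user setting."""
    if chat_mode is not None:
        return chat_mode  # explicit user override always wins
    # Default heuristic: most current OpenAI models speak the chat API;
    # only -instruct models still use the legacy completion endpoint.
    return not model_name.endswith("-instruct")
```

If the heuristic guesses wrong, an error message pointing users at the `chat_mode` parameter (or its real-world equivalent) would let them fix it in one line.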

@riedgar-ms
Copy link
Collaborator

My thoughts are tending more towards removing the AzureOpenAI class and having users instantiate AzureOpenAIChat or AzureOpenAICompletion as appropriate (and adjusting the latter two classes to take properly named arguments rather than *args, **kwargs).

Collaborator

@riedgar-ms riedgar-ms left a comment


Despite my thoughts about whether this is really something we want to be doing, this should fix our immediate issues. @Harsha-Nori ?

@riedgar-ms riedgar-ms merged commit acb38d1 into guidance-ai:main May 1, 2024
98 of 99 checks passed

Successfully merging this pull request may close these issues.

AzureOpenAI Chat Completions not supported
4 participants