fix: DIA-986: Upgrade OpenAI client version & pytest coverage #78
Conversation
completion = self._client.chat.completions.create(
    model=self.openai_model, messages=messages
)
completion_text = completion.choices[0].message.content
TY, this is much more readable now that it's not supporting both the old and new OpenAI formats
OPENAI_MODEL_RETRIEVE = "openai.resources.models.Models.retrieve"
OPENAI_CHAT_COMPLETION = "openai.resources.chat.completions.Completions.create"
OPENAI_EMBEDDING_CREATE = "openai.resources.embeddings.Embeddings.create"


@dataclass
class OpenaiChatCompletionMessageMock(object):
    content: str


@dataclass
class OpenaiChatCompletionChoiceMock(object):
    message: OpenaiChatCompletionMessageMock
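A hedged sketch of how these dataclass mocks can be assembled to mirror the v1 response shape the runtime reads (`completion.choices[0].message.content`). The top-level `OpenaiChatCompletionMock` wrapper and the content string are illustrative additions, not part of this diff:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class OpenaiChatCompletionMessageMock:
    content: str


@dataclass
class OpenaiChatCompletionChoiceMock:
    message: OpenaiChatCompletionMessageMock


# Hypothetical top-level wrapper (not in the PR's diff) holding the choices list.
@dataclass
class OpenaiChatCompletionMock:
    choices: List[OpenaiChatCompletionChoiceMock]


completion = OpenaiChatCompletionMock(
    choices=[
        OpenaiChatCompletionChoiceMock(
            message=OpenaiChatCompletionMessageMock(content="mocked reply")
        )
    ]
)

# Same attribute chain the runtime code under test uses:
assert completion.choices[0].message.content == "mocked reply"
```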
Do you have a general strategy for how to mock OpenAI responses in the future? If we only care about chat completions, this seems OK to start with, but there must be libraries that handle this for more than one endpoint in a cleaner way, e.g. you've mentioned https://github.com/polly3d/mockai. Also compare to what we're doing in pytest for LSE: https://github.com/HumanSignal/label-studio-enterprise/blob/develop/label_studio_enterprise/lse_tests/utils.py#L221
It makes sense to have a single testing tool to mock LLM clients. We could move these mocks to label-studio-sdk, for example, to reuse them across different apps.
Definitely happy with reducing the code surface area, but are there other runtimes planned to support constrained generation? https://github.com/jxnl/instructor seems like a pretty solid low-overhead one to set up, for example.
It's premature to extend it with different constrained generation frameworks until the agent workflow is well-established and tested. But overall, it would be great to follow a minimalistic high-level API for defining a constrained generation schema, similar to that example.
OpenAI client migrated to v1
Tests and server mocks were fixed for v1 compatibility
Some tests were removed since they don't add additional logic → aiming to increase coverage as soon as reports are ready (2 more tests are coming: AsyncOpenAIChatRuntime, server API)
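As a stdlib-only illustration of the kind of server-mock adjustment v1 compatibility requires, a stub with the v1 response attribute chain can also be built from `unittest.mock.MagicMock`, without importing `openai` at all (the content string here is made up):

```python
from unittest import mock

# Stub mirroring the v1 ChatCompletion response shape: .choices[0].message.content.
# MagicMock auto-creates child mocks, so the whole chain can be set in one line.
completion = mock.MagicMock()
completion.choices[0].message.content = "stubbed output"

# Same access pattern the runtime uses after the v1 migration:
completion_text = completion.choices[0].message.content
assert completion_text == "stubbed output"
```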