
fix: DIA-986: Upgrade OpenAI client version & pytest coverage #78

Merged: 6 commits into master, Apr 1, 2024

Conversation

@niklub (Contributor) commented on Apr 1, 2024:

  • OpenAI client migrated to the v1 API

  • Tests and server mocks were fixed for v1 compatibility

  • Some tests were removed since they don't add additional logic; coverage will be increased as soon as reports are ready (2 more tests are coming: AsyncOpenAIChatRuntime, server API)
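For context (not part of the diff), the core shape change behind these fixes can be illustrated without the openai package installed: v0 responses were dict-like and accessed with subscripts, while v1 returns typed objects accessed by attribute. A sketch using stand-in SimpleNamespace objects (these are illustrative stand-ins, not the real SDK types):

```python
from types import SimpleNamespace

# v0-style response: plain nested dicts, accessed with subscripts
v0_response = {"choices": [{"message": {"content": "hello"}}]}
v0_text = v0_response["choices"][0]["message"]["content"]

# v1-style response: typed objects, accessed with attributes
# (mirrors completion.choices[0].message.content in the diff below)
v1_response = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="hello"))]
)
v1_text = v1_response.choices[0].message.content

assert v0_text == v1_text == "hello"
```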

Comment on lines +188 to +191
completion = self._client.chat.completions.create(
model=self.openai_model, messages=messages
)
completion_text = completion.choices[0].message.content
Contributor:

TY, this is much more readable now that it's not supporting both old and new OpenAI formats.

Comment on lines +11 to +23
OPENAI_MODEL_RETRIEVE = "openai.resources.models.Models.retrieve"
OPENAI_CHAT_COMPLETION = "openai.resources.chat.completions.Completions.create"
OPENAI_EMBEDDING_CREATE = "openai.resources.embeddings.Embeddings.create"


@dataclass
class OpenaiChatCompletionMessageMock(object):
content: str


@dataclass
class OpenaiChatCompletionChoiceMock(object):
message: OpenaiChatCompletionMessageMock
Contributor:

Do you have a general strategy for how to mock OpenAI responses in the future? If we only care about chat completions, this seems OK to start with, but there must be libraries that handle this for more than one endpoint in a cleaner way, e.g. you've mentioned https://github.com/polly3d/mockai. Also compare to what we're doing in pytest for LSE: https://github.com/HumanSignal/label-studio-enterprise/blob/develop/label_studio_enterprise/lse_tests/utils.py#L221

@niklub (Author) replied on Apr 1, 2024:

It makes sense to have a single testing tool to mock LLM clients. We can move them to label-studio-sdk, for example, to reuse them across different apps.
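As a rough illustration of how dataclass mocks like the ones quoted above can be wired into a test today, here is a self-contained sketch using unittest.mock.patch. The client and helper here are hypothetical stand-ins for illustration, not the actual codebase or the real openai package:

```python
from dataclasses import dataclass
from unittest.mock import patch


# Dataclass mocks mirroring the shapes in the diff above
@dataclass
class MessageMock:
    content: str


@dataclass
class ChoiceMock:
    message: MessageMock


@dataclass
class CompletionMock:
    choices: list


class FakeCompletions:
    """Stand-in for a client's chat.completions endpoint."""

    def create(self, model, messages):
        raise RuntimeError("real network call: should be patched in tests")


class FakeClient:
    """Stand-in client exposing client.chat.completions.create(...)."""

    class chat:
        completions = FakeCompletions()


def get_text(client, messages):
    # Same access pattern as the v1 call site in the diff
    completion = client.chat.completions.create(model="gpt-x", messages=messages)
    return completion.choices[0].message.content


canned = CompletionMock(choices=[ChoiceMock(message=MessageMock(content="mocked!"))])

# Patch the endpoint method so no network call is made
with patch.object(FakeCompletions, "create", return_value=canned):
    text = get_text(FakeClient(), [{"role": "user", "content": "hi"}])

assert text == "mocked!"
```

A shared fixture built this way could live in label-studio-sdk as suggested, with one canned-response factory per endpoint.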

Contributor:

Definitely happy with reducing the code surface area, but are there other runtimes planned to support constrained generation? https://github.com/jxnl/instructor seems like a pretty solid low-overhead one to set up, for example.

@niklub (Author):

It's premature to extend it with different constrained generation frameworks until the agent workflow is well established and tested. But overall, it would be great to follow a minimalistic high-level API for defining a constrained generation schema, similar to that example.
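To make that idea concrete, here is a minimal standard-library sketch of what a "define a schema, validate the model's output against it" API could look like. All names and shapes here are illustrative assumptions; libraries like instructor do this with Pydantic models and far more robustness (retries, type coercion, etc.):

```python
import json
from dataclasses import dataclass, fields


@dataclass
class Sentiment:
    """Hypothetical output schema for a sentiment-classification skill."""

    label: str
    confidence: float


def parse_constrained(raw_json: str, schema):
    """Validate raw LLM output against a dataclass schema (illustrative only)."""
    data = json.loads(raw_json)
    expected = [f.name for f in fields(schema)]
    missing = set(expected) - data.keys()
    if missing:
        raise ValueError(f"LLM output missing fields: {missing}")
    # Return a typed object instead of a raw dict
    return schema(**{k: data[k] for k in expected})


# A well-formed model response parses into a typed object...
result = parse_constrained('{"label": "positive", "confidence": 0.93}', Sentiment)
assert result.label == "positive"

# ...while a malformed one fails loudly instead of propagating bad data.
try:
    parse_constrained('{"label": "positive"}', Sentiment)
except ValueError:
    pass
```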

@farioas farioas removed their request for review April 1, 2024 14:27
@niklub niklub merged commit d955895 into master Apr 1, 2024
15 checks passed
@niklub niklub deleted the fb-DIA-986 branch April 1, 2024 14:30