feat(llm): add Anthropic LLM client #31
Conversation
lgtm
Args:
    prompt: Prompt as an Anthropic client style list.
    response_format: Optional argument used in the OpenAI API - used to force the json output.
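For context, the quoted lines document the client's generation method. A minimal sketch of what such a method might look like follows; the class name, constructor, and max_tokens value are assumptions for illustration, not taken verbatim from the diff:

```python
from anthropic import AsyncAnthropic


class AnthropicClient:
    """Hypothetical sketch of the client under review."""

    def __init__(self, model_name: str, api_key: str | None = None) -> None:
        self._client = AsyncAnthropic(api_key=api_key)
        self.model_name = model_name

    async def call(self, prompt: list[dict], response_format: dict | None = None) -> str:
        # response_format is accepted for interface parity with the OpenAI client,
        # but the Anthropic Messages API has no equivalent option, so it is unused here.
        response = await self._client.messages.create(
            model=self.model_name,
            max_tokens=1024,
            messages=prompt,
        )
        return response.content[0].text
```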
Not really a comment on this PR, but having an argument in AnthropicClient that is specific to OpenAI clearly illustrates that we should refactor this option away (both here and in the prompt object itself). In a separate PR/ticket, of course.
Maybe, for example, we could have an expect_json flag as a general boolean that each LLM client implements in a way that makes sense for the particular model/API (by passing the appropriate option for that API, or even by post-processing).
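A minimal sketch of that idea, assuming an abstract base class and client names that are purely illustrative:

```python
from abc import ABC, abstractmethod


class LLMClient(ABC):
    """Hypothetical base class; each client maps expect_json onto its own API."""

    @abstractmethod
    async def call(self, prompt: list[dict], expect_json: bool = False) -> str:
        ...


class OpenAIClientSketch(LLMClient):
    async def call(self, prompt: list[dict], expect_json: bool = False) -> str:
        # OpenAI exposes a native JSON mode via response_format.
        kwargs = {"response_format": {"type": "json_object"}} if expect_json else {}
        ...  # pass kwargs to chat.completions.create(...)


class AnthropicClientSketch(LLMClient):
    async def call(self, prompt: list[dict], expect_json: bool = False) -> str:
        # Anthropic has no equivalent option, so the flag could instead trigger
        # prompt augmentation or post-processing of the raw text response.
        ...
```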
You're right. I was also thinking of moving the response_format argument to LLMOptions, which would make the interface cleaner. What do you think?
cc @mhordynski
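For illustration, a sketch of what that could look like; the other fields shown in LLMOptions are assumptions:

```python
from dataclasses import dataclass


@dataclass
class LLMOptions:
    temperature: float | None = None
    max_tokens: int | None = None
    # Moving response_format here would keep client call signatures uniform,
    # at the cost of carrying an OpenAI-specific field in a shared options object.
    response_format: dict | None = None
```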
I think response_format (or its non-OpenAI-specific equivalent) should remain coupled with the prompt (i.e., be part of the PromptTemplate class), because whether we expect JSON depends entirely on the prompt itself.
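A sketch of that alternative, assuming PromptTemplate is a dataclass with system/user fields (the field names are illustrative):

```python
from dataclasses import dataclass


@dataclass
class PromptTemplate:
    system_prompt: str
    user_prompt: str
    # Whether the prompt is designed to elicit JSON travels with the prompt itself;
    # each LLM client translates this into its own API option or a post-processing step.
    expect_json: bool = False
```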
Closing this PR: we decided to use the litellm API to integrate with Anthropic instead.
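For reference, a minimal sketch of the same integration through litellm; the exact model identifier is an assumption:

```python
import litellm

# litellm exposes an OpenAI-style completion() function and routes "anthropic/..."
# model names to the Anthropic API.
response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "Return a JSON object with a 'greeting' key."}],
)
print(response.choices[0].message.content)
```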
This PR adds support for Anthropic's Claude models.