fix: litellm conflict with openai on lib override #480
Fixes #471
- Removed the `original_oai_create` and `original_oai_create_async` variables, since we no longer override OpenAI's methods
- Updated `override()` and `undo_override()` to only handle LiteLLM's methods
- Updated `_override_completion()` and `_override_async_completion()` to only store and patch LiteLLM's methods (see the sketch below)

This way, when both providers are used:
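A minimal sketch of what that LiteLLM-only patching could look like, assuming the provider is a class exposing the methods named above; the wrapper bodies and attribute names here are illustrative, not the PR's actual code:

```python
import litellm


class LiteLLMProvider:
    """Illustrative provider that patches only LiteLLM's entry points."""

    def __init__(self):
        self.original_completion = None
        self.original_async_completion = None

    def _override_completion(self):
        # Store and patch LiteLLM's sync completion only.
        self.original_completion = litellm.completion
        original = self.original_completion

        def patched_completion(*args, **kwargs):
            response = original(*args, **kwargs)
            # ... record the completion event here ...
            return response

        litellm.completion = patched_completion

    def _override_async_completion(self):
        # Store and patch LiteLLM's async completion only.
        self.original_async_completion = litellm.acompletion
        original = self.original_async_completion

        async def patched_acompletion(*args, **kwargs):
            response = await original(*args, **kwargs)
            # ... record the completion event here ...
            return response

        litellm.acompletion = patched_acompletion

    def override(self):
        # OpenAI's methods are intentionally left alone.
        self._override_completion()
        self._override_async_completion()

    def undo_override(self):
        # Restore only what this provider patched.
        if self.original_completion is not None:
            litellm.completion = self.original_completion
        if self.original_async_completion is not None:
            litellm.acompletion = self.original_async_completion
```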
A bit more explanation
- LiteLLM's completion method is completely separate from OpenAI's `Completions.create`.
- Even though LiteLLM uses OpenAI's format internally, it has its own implementation.
- When we call `litellm.completion()`, it doesn't actually call OpenAI's `Completions.create` (see the check below).
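A quick way to see that separation, assuming the openai v1-style client where `Completions.create` lives under `openai.resources.chat.completions`; the patch here is just a placeholder wrapper:

```python
import litellm
from openai.resources.chat.completions import Completions

openai_create_before = Completions.create
litellm_completion_before = litellm.completion

# Replace only LiteLLM's entry point with a trivial wrapper.
litellm.completion = lambda *args, **kwargs: litellm_completion_before(*args, **kwargs)

assert litellm.completion is not litellm_completion_before  # LiteLLM is patched
assert Completions.create is openai_create_before           # OpenAI is untouched
```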
Tests
Breakdown
- `test_provider_override_independence()` (rough sketch below):
- `test_provider_override_order_independence()`:

What's being verified
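Assuming the illustrative `LiteLLMProvider` from the sketch above, the independence test might look roughly like this; the actual test bodies are not shown in this description, so this is only a guess at their shape:

```python
import litellm
from openai.resources.chat.completions import Completions


def test_provider_override_independence():
    # Snapshot both entry points before overriding.
    openai_create = Completions.create
    litellm_completion = litellm.completion

    provider = LiteLLMProvider()  # illustrative class from the sketch above
    provider.override()

    assert litellm.completion is not litellm_completion  # LiteLLM patched
    assert Completions.create is openai_create           # OpenAI untouched

    provider.undo_override()
    assert litellm.completion is litellm_completion      # LiteLLM restored
```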