
LiteLLM Minor Fixes & Improvements (2024/12/18) p1 #7295

Merged
merged 17 commits into main on Dec 19, 2024

Conversation

krrishdholakia (Contributor) commented Dec 18, 2024

  • fix(health.md): add rerank model health check information

  • build(model_prices_and_context_window.json): add gemini 2.0 for google ai studio - pricing + commercial rate limits

  • build(model_prices_and_context_window.json): add gemini-2.0 supports audio output = true

  • docs(team_model_add.md): clarify allowing teams to add models is an enterprise feature

  • fix(o1_transformation.py): add support for 'n', 'response_format' and 'stop' params for o1 and 'stream_options' param for o1-mini

  • build(model_prices_and_context_window.json): add 'supports_system_message' to supporting openai models

needed as the o1-preview and o1-mini models don't support a 'system' message

  • fix(o1_transformation.py): translate the system message based on whether the o1 model supports it (see the sketch below)
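
A minimal sketch of that translation step, assuming a per-model supports_system_message flag like the one added to the model map above (illustrative, not the exact o1_transformation.py code):

```python
from typing import Dict, List


def translate_system_messages(
    messages: List[Dict], supports_system_message: bool
) -> List[Dict]:
    """Downgrade 'system' messages to 'user' messages for o1 variants
    (e.g. o1-preview, o1-mini) that reject the system role."""
    if supports_system_message:
        return messages
    return [
        {**m, "role": "user"} if m.get("role") == "system" else m
        for m in messages
    ]
```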

  • fix(o1_transformation.py): return 'stream' param as supported for o1-mini/o1-preview

o1 currently doesn't support streaming, but the other model versions do

Fixes #7292

  • fix(o1_transformation.py): return tool calling/response_format in supported params if the model map says so (see the sketch below)

Fixes #7292
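
A hedged sketch of that supported-params logic (function and key names are illustrative; 'supports_function_calling' and 'supports_response_schema' mirror the kind of capability flags the model map carries):

```python
from typing import Dict, List


def get_supported_params(model: str, model_map: Dict[str, Dict]) -> List[str]:
    # Params every o1 variant accepts after this PR.
    params = ["n", "stop"]
    info = model_map.get(model, {})
    # Only advertise tool calling / structured output when the
    # model map says the model supports them.
    if info.get("supports_function_calling"):
        params += ["tools", "tool_choice"]
    if info.get("supports_response_schema"):
        params.append("response_format")
    return params
```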

  • fix: fix linting errors

  • fix: update '_transform_messages'

  • fix(o1_transformation.py): fix provider passed for supported param checks

  • test(base_llm_unit_tests.py): skip the test if the API takes >5s to respond (see the sketch after this list)

  • fix(utils.py): return False in 'supports_factory' if the value can't be found

  • fix(o1_transformation.py): always return stream + stream_options as supported params + handle stream options being passed in for azure o1

  • feat(openai.py): support stream faking natively in openai handler

Allows streaming to be faked for just the "o1" model, while o1-mini and o1-preview stream natively (see the sketch below).

Fixes #7292
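
A hedged sketch of the stream-faking idea (illustrative names, not the exact openai.py handler):

```python
from typing import Dict, Iterator


def should_fake_stream(model: str) -> bool:
    # Only the base "o1" model lacks native streaming;
    # o1-mini and o1-preview stream natively.
    return model == "o1"


def mock_stream(full_response: Dict) -> Iterator[Dict]:
    # Fake a stream by yielding the complete, non-streamed
    # response as a single chunk.
    yield full_response
```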

  • fix(openai.py): use inference param instead of original optional param
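
And a sketch of the latency-based skip mentioned in the test bullet above, assuming pytest (api_call is a hypothetical callable standing in for the real test helper):

```python
import time

import pytest


def run_or_skip(api_call, max_seconds: float = 5.0):
    # Run api_call, but skip the surrounding test instead of
    # failing it when the provider responds too slowly.
    start = time.time()
    result = api_call()
    if time.time() - start > max_seconds:
        pytest.skip(f"API took more than {max_seconds}s to respond")
    return result
```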


vercel bot commented Dec 18, 2024

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| litellm | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Dec 19, 2024 2:37am |

The code flagged by the CodeQL alert below, from litellm/llms/openai/chat/o1_transformation.py:

```python
        except Exception:
            verbose_logger.debug(
                f"Unable to infer model provider for model={model}, defaulting to openai for o1 supported param check"
            )
```

Check failure — Code scanning / CodeQL

Clear-text logging of sensitive information (High)

This expression logs sensitive data (secret) as clear text. (The alert fires on multiple logging expressions.)

Copilot Autofix (AI)

To fix the problem, we should avoid logging sensitive information directly. Instead, we can log a generic message that does not include the sensitive data. This way, we maintain the ability to log useful information for debugging purposes without exposing sensitive data.

  • Replace the log message that includes the model variable with a more generic message.
  • Ensure that no sensitive information is included in the log message.
Suggested changeset 1: litellm/llms/openai/chat/o1_transformation.py

Autofix patch — run the following command in your local git repository to apply it:
```sh
cat << 'EOF' | git apply
diff --git a/litellm/llms/openai/chat/o1_transformation.py b/litellm/llms/openai/chat/o1_transformation.py
--- a/litellm/llms/openai/chat/o1_transformation.py
+++ b/litellm/llms/openai/chat/o1_transformation.py
@@ -68,3 +68,3 @@
             verbose_logger.debug(
-                f"Unable to infer model provider for model={model}, defaulting to openai for o1 supported param check"
+                "Unable to infer model provider, defaulting to openai for o1 supported param check"
             )
EOF
```

codecov bot commented Dec 19, 2024

Codecov Report

Attention: Patch coverage is 99.06542% with 1 line in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ...llm/litellm_core_utils/prompt_templates/factory.py | 0.00% | 1 Missing ⚠️ |


@krrishdholakia merged commit 5253f63 into main on Dec 19, 2024
26 of 27 checks passed
@krrishdholakia changed the title from "fix(health.md): add rerank model health check information" to "LiteLLM Minor Fixes & Improvements (2024/12/18) p1" on Dec 19, 2024

Successfully merging this pull request may close these issues.

[Bug]: Function Calling Not Working with New o1 Model via litellm (#7292)