LiteLLM Minor Fixes & Improvements (2024/12/18) p1 #7295
Conversation
except Exception:
    verbose_logger.debug(
        f"Unable to infer model provider for model={model}, defaulting to openai for o1 supported param check"
    )
Check failure
Code scanning / CodeQL
Clear-text logging of sensitive information (High)
This expression logs sensitive data (secret).
Copilot Autofix AI 7 days ago
To fix the problem, we should avoid logging sensitive information directly. Instead, we can log a generic message that does not include the sensitive data. This way, we maintain the ability to log useful information for debugging purposes without exposing sensitive data.
- Replace the log message that includes the model variable with a more generic message.
- Ensure that no sensitive information is included in the log message.
@@ -68,3 +68,3 @@
     verbose_logger.debug(
-        f"Unable to infer model provider for model={model}, defaulting to openai for o1 supported param check"
+        "Unable to infer model provider, defaulting to openai for o1 supported param check"
     )
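For context, a minimal self-contained sketch of the patched pattern is below; the infer_provider helper and logger setup are illustrative assumptions, not LiteLLM's actual code:

import logging

verbose_logger = logging.getLogger("litellm")

def infer_provider(model: str) -> str:
    # Hypothetical stand-in for LiteLLM's provider-inference helper:
    # treats a "provider/model" prefix as the provider name.
    provider, _, rest = model.partition("/")
    if not rest:
        raise ValueError("no provider prefix in model string")
    return provider

def provider_for_o1_check(model: str) -> str:
    try:
        return infer_provider(model)
    except Exception:
        # Log a generic message only: interpolating `model` could leak
        # secrets embedded in user-supplied model strings, which is the
        # clear-text logging issue CodeQL flagged.
        verbose_logger.debug(
            "Unable to infer model provider, defaulting to openai for o1 supported param check"
        )
        return "openai"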
Codecov Report
Attention: Patch coverage is …
fix(health.md): add rerank model health check information
build(model_prices_and_context_window.json): add gemini 2.0 for google ai studio - pricing + commercial rate limits
build(model_prices_and_context_window.json): add gemini-2.0 supports audio output = true
docs(team_model_add.md): clarify allowing teams to add models is an enterprise feature
fix(o1_transformation.py): add support for 'n', 'response_format' and 'stop' params for o1 and 'stream_options' param for o1-mini
build(model_prices_and_context_window.json): add 'supports_system_message' to supporting openai models
needed as o1-preview and o1-mini models don't support 'system message'
fix(o1_transformation.py): translate system message based on whether the o1 model supports it (see the sketch after this list)
fix(o1_transformation.py): return 'stream' param support if o1-mini/o1-preview
o1 currently doesn't support streaming, but the other model versions do
Fixes #7292
Fixes #7292
fix: fix linting errors
fix: update '_transform_messages'
fix(o1_transformation.py): fix provider passed for supported param checks
test(base_llm_unit_tests.py): skip test if api takes >5s to respond
fix(utils.py): return false in 'supports_factory' if value can't be found
fix(o1_transformation.py): always return stream + stream_options as supported params + handle stream options being passed in for azure o1
feat(openai.py): support stream faking natively in openai handler
Allows o1 calls to be faked for just the "o1" model, allows native streaming for o1-mini, o1-preview (see the sketch below)
Fixes #7292
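A minimal sketch of the behavior these commits describe (illustrative only; the model set, function names, and message shape are assumptions, not LiteLLM's actual implementation):

# Models assumed, per the PR description, to reject 'system' messages.
O1_MODELS_WITHOUT_SYSTEM_SUPPORT = {"o1-preview", "o1-mini"}

def translate_system_messages(model: str, messages: list) -> list:
    # Pass messages through untouched when the model supports them.
    if model not in O1_MODELS_WITHOUT_SYSTEM_SUPPORT:
        return messages
    translated = []
    for message in messages:
        if message.get("role") == "system":
            # Downgrade the system prompt to a user message so the
            # request is still accepted by models without system support.
            translated.append({"role": "user", "content": message["content"]})
        else:
            translated.append(message)
    return translated

def should_fake_stream(model: str) -> bool:
    # Per the PR: only the base "o1" model lacks native streaming;
    # o1-mini and o1-preview stream natively.
    return model == "o1"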