When using the new o1 model (version 2024-12-17) via litellm, I'm unable to send image content, even though the same request works when calling OpenAI's API directly. Direct calls to the OpenAI API succeed with both JPG and PNG image URLs, but the same request through litellm results in the following error:
curl:
curl --request POST \
  --url http://localhost:4000/v1/chat/completions \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "openai/o1",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What’s in this image?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
            }
          }
        ]
      }
    ]
  }'
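For reference, the same request can be reproduced from Python without any SDK. This is a minimal sketch, assuming a LiteLLM proxy listening on localhost:4000; `build_payload` is a hypothetical helper written for this report, not part of any library:

```python
import json
import urllib.request

def build_payload(image_url: str) -> dict:
    """Build a chat-completions payload mixing text and image_url content parts."""
    return {
        "model": "openai/o1",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What's in this image?"},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_payload(
    "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/"
    "Gfp-wisconsin-madison-the-nature-boardwalk.jpg/"
    "2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
)

# Send to the LiteLLM proxy (assumed to be running locally on port 4000).
req = urllib.request.Request(
    "http://localhost:4000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment against a live proxy
```

Sending this payload directly to api.openai.com succeeds; through the proxy it produces the 500 above.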
error:
{
  "error": {
    "message": "litellm.APIError: APIError: OpenAIException - Image content is not supported for O-1 models. Set litellm.drop_param to True to drop image content.\nReceived Model Group=openai/o1\nAvailable Model Group Fallbacks=None",
    "type": null,
    "param": null,
    "code": "500"
  }
}
Expected Behavior:
Sending image content to the o1 model via litellm should work as it does when calling the OpenAI API directly.
It should not be necessary to set litellm.drop_param just to handle image content.
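For completeness, the workaround the error message points to can be expressed in the proxy config. This is a sketch, assuming the standard LiteLLM proxy config layout; note that in recent litellm versions the setting appears to be spelled `drop_params` (plural), although the error text says `drop_param`:

```yaml
# LiteLLM proxy config (sketch): drop_params silently removes request
# parameters a model does not support, instead of raising an APIError.
litellm_settings:
  drop_params: true

model_list:
  - model_name: openai/o1
    litellm_params:
      model: openai/o1
      api_key: os.environ/OPENAI_API_KEY
```

But for o1 (2024-12-17), which does support vision input upstream, dropping the image content loses information; the proxy should pass it through instead.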
Additional Context:
Even though function calling issues have been marked as resolved (see Issue #7292), the problem persists. Please revisit that issue as well.
Request:
Add proper image support for the o1 model in litellm.
Reopen or re-check the previously mentioned function calling issue.
Thanks!
Relevant log output
{
  "error": {
    "message": "litellm.APIError: APIError: OpenAIException - Image content is not supported for O-1 models. Set litellm.drop_param to True to drop image content.\nReceived Model Group=openai/o1\nAvailable Model Group Fallbacks=None",
    "type": null,
    "param": null,
    "code": "500"
  }
}
Are you an ML Ops Team?
No
What LiteLLM version are you on ?
v1.55.4
Twitter / LinkedIn details
No response
Even though function calling issues have been marked as resolved (see #7292), the problem still persists. Please revisit this issue as well.
@mvrodrig the fix for function calling just went out in v1.55.6. GitHub closes tickets once PRs with fixes land on main. You can see the release a fix shipped in by clicking on the commit that merged it.