
[Bug]: Function Calling Not Working with New o1 Model via litellm #7292

Closed
mvrodrig opened this issue Dec 18, 2024 · 13 comments · Fixed by #7295
Labels: bug (Something isn't working)

@mvrodrig

What happened?


Hi @krrishdholakia,

I'm encountering an issue when trying to use function calling with the new o1 model (2024-12-17 version) through litellm. The same request works perfectly fine when calling the OpenAI API directly, but fails when using litellm.

Steps to Reproduce:

  1. Make a direct call to the OpenAI endpoint (works as expected):
curl --request POST \
  --url https://api.openai.com/v1/chat/completions \
  --header 'Authorization: Bearer sk-****' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "o1",
    "messages": [
      {
        "role": "user",
        "content": "What'\''s the weather like in Paris today?"
      }
    ],
    "functions": [
      {
        "name": "get_weather",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {"type": "string"}
          }
        }
      }
    ]
  }'

Expected Result: The OpenAI API responds successfully with a function call:

{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1734538508,
  "model": "o1-2024-12-17",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "function_call": {
          "name": "get_weather",
          "arguments": "{\"location\":\"Paris\"}"
        }
      },
      "finish_reason": "function_call"
    }
  ],
  ...
}

  2. Attempt the equivalent request (using the `tools` format) through the litellm proxy:
curl --request POST \
  --url http://localhost:4000/v1/chat/completions \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "openai/o1",
    "stream": false,
    "messages": [
      {
        "role": "user",
        "content": "What'\''s the weather like in Boston today?"
      }
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_current_weather",
          "description": "Get the current weather in a given location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
              },
              "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"]
              }
            },
            "required": ["location"]
          }
        }
      }
    ]
  }'

Actual Result: I receive the following error:

{
	"error": {
		"message": "litellm.UnsupportedParamsError: openai does not support parameters: {'tools': [{'type': 'function', 'function': {'name': 'get_current_weather', 'description': 'Get the current weather in a given location', 'parameters': {'type': 'object', 'properties': {'location': {'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA'}, 'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']}}, 'required': ['location']}}}]}, for model=o1. To drop these, set `litellm.drop_params=True` or for proxy:\n\n`litellm_settings:\n drop_params: true`\n\nReceived Model Group=openai/o1\nAvailable Model Group Fallbacks=None",
		"type": "None",
		"param": null,
		"code": "400"
	}
}

  3. I've also tried setting `drop_params: true` as suggested. When I set:
litellm_settings:
  drop_params: true

I then receive the following response:

{
	"id": "chatcmpl-AfrN76ojCuj6RQQXPyVWdFWAqStSg",
	"created": 1734539973,
	"model": "o1-2024-12-17",
	"object": "chat.completion",
	"system_fingerprint": "fp_28cbeb3597",
	"choices": [
		{
			"finish_reason": "stop",
			"index": 0,
			"message": {
				"content": "I’m sorry, but I don’t have real-time information about the current weather in Boston. You may want to check a reliable weather service or app (such as the National Weather Service, Weather.com, or a local Boston news station) for up-to-date conditions and forecasts.",
				"role": "assistant",
				"tool_calls": null,
				"function_call": null
			}
		}
	],
	"usage": {
		"completion_tokens": 322,
		"prompt_tokens": 14,
		"total_tokens": 336,
		"completion_tokens_details": {
			"accepted_prediction_tokens": 0,
			"audio_tokens": 0,
			"reasoning_tokens": 256,
			"rejected_prediction_tokens": 0
		},
		"prompt_tokens_details": {
			"audio_tokens": 0,
			"cached_tokens": 0
		}
	},
	"service_tier": null
}

This indicates that the function-calling capability is lost: the tools are silently dropped and the assistant only returns a generic response without calling the function. The same behavior reproduces when calling the litellm Python SDK directly, as sketched below.
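
For completeness, here is a minimal sketch of the same call through the litellm Python SDK (the tool definition mirrors the curl request above; the snippet assumes OPENAI_API_KEY is set and is meant as an illustration, not a verified reproduction):

import litellm

# Same tool definition as the proxy request above.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

# On affected versions this raises litellm.UnsupportedParamsError;
# with litellm.drop_params = True the tools are silently removed instead.
response = litellm.completion(
    model="o1",
    messages=[{"role": "user", "content": "What's the weather like in Boston today?"}],
    tools=tools,
)
print(response.choices[0].message)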

Any guidance on how to properly format the request or if there’s a workaround would be greatly appreciated.
Thanks!

Relevant log output

(Same litellm.UnsupportedParamsError as shown in step 2 above.)

Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.55.3


@mvrodrig mvrodrig added the bug Something isn't working label Dec 18, 2024
@krrishdholakia krrishdholakia self-assigned this Dec 18, 2024
@krrishdholakia (Contributor)

acknowledging this - thanks for the issue. will work on it today.

@krrishdholakia (Contributor) commented Dec 18, 2024

Going through the list of OpenAI params and checking which are not yet supported by o1 (a sketch of how this split might be enforced follows the list):

  • "logprobs", ❌
  • "tools", (❌ o1-mini, ✅ o1)
  • "tool_choice", (❌ o1-mini, ✅ o1)
  • "parallel_tool_calls", ❌
  • "function_call", (❌ o1-mini, ✅ o1) [assume same as tool calling]
  • "functions", (❌ o1-mini, ✅ o1) [assume same as tool calling]
  • "top_p", ❌
  • "n", ✅
  • "presence_penalty", ❌
  • "frequency_penalty", ❌
  • "top_logprobs", ❌
  • "response_format", ✅ (for response_format={"type": "json_object"} - the word 'json' needs to exist in messages, we should add this automatically if litellm.modify_params=True), ❌ o1-mini
  • "stop", ✅
  • stream - supported on o1-mini, not on o1
  • "stream_options", supported on o1-mini, not on o1 (as stream not supported)

@krrishdholakia (Contributor)

o1-mini also doesn't support the `system` role; however, system messages now work with o1. A rough sketch of the kind of translation this implies for o1-mini is below.
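
A minimal sketch of that translation (illustrative only; litellm's actual handling may differ):

def adapt_system_messages_for_o1_mini(messages: list[dict]) -> list[dict]:
    # o1-mini rejects the "system" role, so re-send that content as a user message.
    return [
        {**m, "role": "user"} if m.get("role") == "system" else m
        for m in messages
    ]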

@krrishdholakia (Contributor) commented Dec 18, 2024

"latest o1 model supports both text and image inputs"

  • image url inputs seem to work with o1
  • image url inputs do not work with o1-mini (base64-encoded images work though)
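
For reference, the two image-input shapes being compared are sketched below (standard OpenAI-style vision messages; the file name and URL are placeholders):

import base64

# Remote-URL form -- reported above to work with o1 but not o1-mini.
url_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
    ],
}

# Base64 data-URL form -- reported above to work with o1-mini as well.
with open("cat.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode()

b64_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{encoded}"}},
    ],
}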

@krrishdholakia krrishdholakia pinned this issue Dec 18, 2024
@krrishdholakia (Contributor)

Also need to map the new 'developer' role to the 'system' role for non-OpenAI providers; a rough sketch is below.
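
A rough sketch of that mapping (illustrative; not litellm's actual code):

def map_developer_role_to_system(messages: list[dict]) -> list[dict]:
    # OpenAI's new "developer" role isn't recognized by most other providers,
    # so translate it back to "system" before forwarding the request.
    return [
        {**m, "role": "system"} if m.get("role") == "developer" else m
        for m in messages
    ]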

@krrishdholakia (Contributor) commented Dec 18, 2024

  • add conditional logic for o1 system message handling
  • support fake streaming on o1 calls (don't fake on o1-mini calls; see the sketch after this list)
  • allow tool calling/response_schema for o1 vs. o1-mini
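
A minimal sketch of what fake streaming could look like: make a normal (non-streaming) call, then yield the result back as stream-style chunks so client code that iterates over a stream keeps working. This is an illustration under those assumptions, not the implementation that actually landed:

from typing import Iterator

import litellm

def fake_stream(model: str, messages: list[dict], **kwargs) -> Iterator[dict]:
    # o1 doesn't accept stream=True yet, so make a regular completion call...
    response = litellm.completion(model=model, messages=messages, **kwargs)
    content = response.choices[0].message.content or ""
    # ...then hand the text back as a single delta chunk plus a finish chunk.
    yield {"choices": [{"delta": {"role": "assistant", "content": content}, "finish_reason": None}]}
    yield {"choices": [{"delta": {}, "finish_reason": "stop"}]}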

krrishdholakia added a commit that referenced this issue Dec 18, 2024
…1-preview

o1 currently doesn't support streaming, but the other model versions do

Fixes #7292
krrishdholakia added a commit that referenced this issue Dec 18, 2024
@afbarbaro (Contributor)

> support fake streaming on o1 calls (don't fake on o1 mini call)

I thought the new o1 model supports "real" streaming. no?

@afbarbaro (Contributor)

we're looking to upgrade litellm to be able to use the o1 model released yesterday.

@krrishdholakia (Contributor)

here's what i see @afbarbaro

[Screenshot attached: 2024-12-18 at 3:31:48 PM]

it works for:

  • o1-mini
  • o1-preview

working on adding fake streaming for o1, so it doesn't cause issues in any client code

i assume they'll eventually roll out real streaming for o1

krrishdholakia added a commit that referenced this issue Dec 19, 2024
Allows o1 calls to be faked for just the "o1" model, allows native streaming for o1-mini, o1-preview

 Fixes #7292
@mvrodrig (Author)

Hi @krrishdholakia,

Is it possible that I'm still getting the function-calling error when using o1 with the latest litellm version (v1.55.4)?

{
	"error": {
		"message": "litellm.UnsupportedParamsError: openai does not support parameters: {'tools': [{'type': 'function', 'function': {'name': 'get_current_weather', 'description': 'Get the current weather in a given location', 'parameters': {'type': 'object', 'properties': {'location': {'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA'}, 'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']}}, 'required': ['location']}}}]}, for model=o1. To drop these, set `litellm.drop_params=True` or for proxy:\n\n`litellm_settings:\n drop_params: true`\n\nReceived Model Group=openai/o1\nAvailable Model Group Fallbacks=None",
		"type": "None",
		"param": null,
		"code": "400"
	}
}

@mperezjodal

Hello! I'm having the same issue as @mvrodrig

Error code: 400 - {'error': {'message': 'litellm.UnsupportedParamsError: openai does not support parameters: {\'tools\': [{\'type\': \'function\', \'function\': {\'description\': ...

@krrishdholakia (Contributor)

this fix just went out in v1.55.6

please bump and let me know if the issue persists

@mvrodrig (Author)

Thanks @krrishdholakia, I confirm the issue is solved!
