
AssertionError: OpenAI requires tool_call_id #895

Open
AlexEnrique opened this issue Feb 11, 2025 · 3 comments · May be fixed by #933
Labels
bug Something isn't working good first issue Good for newcomers

Comments

AlexEnrique commented Feb 11, 2025

Description

Hi. I ran into an issue when switching models.
Basically, I have implemented an API endpoint through which I can change models.

Here is what happened:

  • I started with gemini-1.5-flash, asking what time it is, which calls my now() tool.
  • It ran without any problem, returning the current datetime.
  • Then I switched to gpt-4o-mini and asked the same question again, passing the message history I got from the Gemini run.
  • This raised the following exception: AssertionError: OpenAI requires tool_call_id to be set: ToolCallPart(tool_name='now', args={}, tool_call_id=None, part_kind='tool-call')

[Edit] Minimal working example

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.models.gemini import GeminiModel

from datetime import datetime


open_ai_api_key = ...
gemini_api_key = ...


openai_model = OpenAIModel(
    model_name='gpt-4o-mini',
    api_key=open_ai_api_key,
)

gemini_model = GeminiModel(
    model_name='gemini-2.0-flash-exp',  # could be gemini-1.5-flash also
    api_key=gemini_api_key,
)

agent = Agent(gemini_model)


@agent.tool_plain
def now():
    return datetime.now().isoformat()


r1 = agent.run_sync('what is the current date time?')
print(r1.all_messages_json())

r2 = agent.run_sync(  # this will fail
    'what time is now?',
    model=openai_model,
    message_history=r1.all_messages(),
)
print(r2.all_messages_json())

Message history (stored before calling gpt-4o-mini)

[ModelRequest(parts=[SystemPromptPart(content='\nYou are a test agent.\n\nYou must do what the user asks.\n', dynamic_ref=None, part_kind='system-prompt'), UserPromptPart(content='call now', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 5, 628330, tzinfo=TzInfo(UTC)), part_kind='user-prompt')], kind='request'),
 ModelResponse(parts=[TextPart(content='I am sorry, I cannot fulfill this request. The available tools do not provide the functionality to make calls.\n', part_kind='text')], model_name='gemini-1.5-flash', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 6, 59052, tzinfo=TzInfo(UTC)), kind='response'),
 ModelRequest(parts=[UserPromptPart(content='call the tool now', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 14, 394461, tzinfo=TzInfo(UTC)), part_kind='user-prompt')], kind='request'),
 ModelResponse(parts=[TextPart(content='I cannot call a tool.  The available tools are functions that I can execute, not entities that I can call in a telephone sense.  Is there something specific you would like me to do with one of the available tools?\n', part_kind='text')], model_name='gemini-1.5-flash', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 15, 449295, tzinfo=TzInfo(UTC)), kind='response'),
 ModelRequest(parts=[UserPromptPart(content='what time is now?', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 23, 502937, tzinfo=TzInfo(UTC)), part_kind='user-prompt')], kind='request'),
 ModelResponse(parts=[ToolCallPart(tool_name='now', args={}, tool_call_id=None, part_kind='tool-call')], model_name='gemini-1.5-flash', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 24, 151395, tzinfo=TzInfo(UTC)), kind='response'),
 ModelRequest(parts=[ToolReturnPart(tool_name='now', content='2025-02-11T12:55:24.153651-03:00', tool_call_id=None, timestamp=datetime.datetime(2025, 2, 11, 15, 55, 24, 153796, tzinfo=TzInfo(UTC)), part_kind='tool-return')], kind='request'),
 ModelResponse(parts=[TextPart(content='The current time is 2025-02-11 12:55:24 -03:00.\n', part_kind='text')], model_name='gemini-1.5-flash', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 24, 560881, tzinfo=TzInfo(UTC)), kind='response')]

Traceback

Traceback (most recent call last):
  File "/app/agents/_agents/_wrapper.py", line 125, in run_stream
    async with self._agent.run_stream(
               ~~~~~~~~~~~~~~~~~~~~~~^
        user_prompt=user_prompt,
        ^^^^^^^^^^^^^^^^^^^^^^^^
    ...<2 lines>...
        deps=self.deps,
        ^^^^^^^^^^^^^^^
    ) as result:
    ^
  File "/usr/local/lib/python3.13/contextlib.py", line 214, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/pydantic_ai/agent.py", line 595, in run_stream
    async with node.run_to_result(GraphRunContext(graph_state, graph_deps)) as r:
               ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/contextlib.py", line 214, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/pydantic_ai/_agent_graph.py", line 415, in run_to_result
    async with ctx.deps.model.request_stream(
               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
        ctx.state.message_history, model_settings, model_request_parameters
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ) as streamed_response:
    ^
  File "/usr/local/lib/python3.13/contextlib.py", line 214, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/pydantic_ai/models/openai.py", line 160, in request_stream
    response = await self._completions_create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        messages, True, cast(OpenAIModelSettings, model_settings or {}), model_request_parameters
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/local/lib/python3.13/site-packages/pydantic_ai/models/openai.py", line 203, in _completions_create
    openai_messages = list(chain(*(self._map_message(m) for m in messages)))
  File "/usr/local/lib/python3.13/site-packages/pydantic_ai/models/openai.py", line 267, in _map_message
    tool_calls.append(self._map_tool_call(item))
                      ~~~~~~~~~~~~~~~~~~~^^^^^^
  File "/usr/local/lib/python3.13/site-packages/pydantic_ai/models/openai.py", line 284, in _map_tool_call
    id=_guard_tool_call_id(t=t, model_source='OpenAI'),
       ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/pydantic_ai/_utils.py", line 200, in guard_tool_call_id
    assert t.tool_call_id is not None, f'{model_source} requires `tool_call_id` to be set: {t}'
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: OpenAI requires `tool_call_id` to be set: ToolCallPart(tool_name='now', args={}, tool_call_id=None, part_kind='tool-call')

AlexEnrique (Author) commented Feb 11, 2025

Also, I was wondering whether exceptions raised by a tool (other than ModelRetry) deserve their own part type and should go into the ModelMessage, so that they appear in the message history.

@sydney-runkle sydney-runkle added the bug Something isn't working label Feb 13, 2025
sydney-runkle (Member) commented

Thanks for the report. Might be a good first issue for someone looking to help out with a bug fix.

Perhaps we should just use a dummy value here so that you can pass messages through in a model-agnostic way...

@sydney-runkle sydney-runkle added the good first issue Good for newcomers label Feb 14, 2025
AlexEnrique (Author) commented

@sydney-runkle
I was looking at the code and mapped the places where tool_call_id may be None:

For Gemini

  • pydantic_ai.models.gemini:
    • _process_response_from_parts (line 514)

For Mistral

  • pydantic_ai.models.mistral:
    • MistralModel._map_mistral_to_pydantic_tool_call (line 332)
    • MistralStreamedResponse._try_get_result_tool_from_text (line 537)

Since we have models that expect tool_call_id to be non-null, I would suggest changing ToolCallPart to have a non-nullable tool_call_id and generating dummy ids in all of these places.

I'll try to make a PR for this
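The proposed change could look roughly like the sketch below. The `generate_tool_call_id` helper and its `pyd_ai_` prefix are my own placeholders rather than pydantic_ai API, and the class is a simplified stand-in for pydantic_ai.messages.ToolCallPart.

```python
import uuid
from dataclasses import dataclass, field


def generate_tool_call_id() -> str:
    # Hypothetical helper: produce a dummy id for providers
    # (Gemini, Mistral) that don't return one themselves.
    return f'pyd_ai_{uuid.uuid4().hex}'


@dataclass
class ToolCallPart:
    # Simplified stand-in: tool_call_id is now non-nullable, with a
    # generated default instead of Optional[str] = None.
    tool_name: str
    args: dict
    tool_call_id: str = field(default_factory=generate_tool_call_id)
    part_kind: str = 'tool-call'


# A part built without an explicit id now gets a dummy one, so the
# OpenAI mapping's assertion can never trip.
part = ToolCallPart(tool_name='now', args={})
```

With this shape, OpenAI's `_map_tool_call` would always receive a string id, and provider-specific response mappers would no longer need to special-case a missing value.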
