Support/Documentation for Agent handoffs #219
Comments
Hey @Bradley-Butcher,

```python
@agent_a.tool
async def transfer_to_agent_b(ctx: RunContext[HypothesisDeps]):
    return await ctx.deps.agent_b.run(...)
```

is exactly what I'd suggest, it just needs documenting. This might overlap with #120.
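For readers landing here, a minimal self-contained version of that pattern might look like the sketch below. The deps class, model string, prompts and the `question` parameter are illustrative placeholders (only `Agent`, `RunContext`, tools and `deps` come from pydantic-ai itself), so treat this as a sketch of the idea rather than an official recipe:

```python
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext


@dataclass
class HypothesisDeps:
    # holds the agent we delegate to
    agent_b: Agent


# hypothetical agents, purely for illustration
agent_b = Agent('openai:gpt-4o', system_prompt='Answer follow-up questions in detail.')
agent_a = Agent(
    'openai:gpt-4o',
    deps_type=HypothesisDeps,
    system_prompt='Use transfer_to_agent_b when the user needs a detailed answer.',
)


@agent_a.tool
async def transfer_to_agent_b(ctx: RunContext[HypothesisDeps], question: str) -> str:
    """Delegate the question to agent_b and return its answer."""
    result = await ctx.deps.agent_b.run(question)
    return result.data


result = agent_a.run_sync('Explain this in depth, please.', deps=HypothesisDeps(agent_b=agent_b))
print(result.data)
```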
Inspired by the example in #120:

```python
from dataclasses import dataclass

from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext

# `model` is assumed to be defined earlier (any model supported by pydantic-ai)


@dataclass
class Deps:
    agent_add: Agent
    agent_writer: Agent


class PoemModel(BaseModel):
    """The model returned by the writer agent."""
    text: str = Field(description="the poem generated by the agent")


class FinalResponse(BaseModel):
    """The final answer of the query."""
    output: str | PoemModel


agent = Agent(
    model,
    system_prompt="""
    You are a helpful assistant.
    Reroute to the appropriate agent based on the user's query.
    """,
    result_type=FinalResponse)

writer_agent = Agent(
    model,
    system_prompt="Write about a given topic.",
    result_type=PoemModel)

add_agent = Agent(
    model,
    system_prompt="return the sum of the 2 numbers")

deps = Deps(agent_add=add_agent, agent_writer=writer_agent)


@agent.tool
def transfer_to_writer_agent(ctx: RunContext[Deps], topic: str):
    """Ask writer_agent to write a poem about a given topic."""
    print(f"topic for poem: {topic}")
    result = ctx.deps.agent_writer.run_sync(f"Write a poem about {topic}.")
    return result


@agent.tool
def transfer_to_math_agent(ctx: RunContext[Deps], a: int, b: int) -> str:
    """Ask add_agent to add 2 numbers."""
    print(f"numbers to add: {a}, {b}")
    result = ctx.deps.agent_add.run_sync(f"add {a} and {b}").data
    return result
```

Running the writer agent directly:

```python
r0 = writer_agent.run_sync(user_prompt="write a poem in 1 verse about Pydantic ?")
# 21:32:00.516 writer_agent run prompt=write a poem in 1 verse about Pydantic ?
# 21:32:00.518 preparing model and tools run_step=1
# 21:32:00.519 model request
# 21:32:02.183 handle model response

r0.data
# PoemModel(text="In the realm of Python's embrace, Pydantic reigns with graceful pace, \nData models, pure and bright, \nValidation dances in the light.")
```

Routing through the top-level agent:

```python
result = agent.run_sync("what is the sum of 2 and 2 equal to ?", deps=deps)
# 21:44:29.366 agent run prompt=what is the sum of 2 and 2 equal to ?
# 21:44:29.367 preparing model and tools run_step=1
# 21:44:29.368 model request
# 21:44:30.151 handle model response
# 21:44:30.154 running tools=['transfer_to_math_agent']
# 21:44:31.061 preparing model and tools run_step=2
# 21:44:31.062 model request
# 21:44:32.494 handle model response

print(f"{result.data}, {type(result.data)}")
# output='The sum of 2 and 2 is 4.', <class '__main__.FinalResponse'>

result = agent.run_sync("write a poem about quantum physics in 1 verse of 3 lines", deps=deps)
# 21:51:56.415 agent run prompt=write a poem about quantum physics in 1 verse of 3 lines
# 21:51:56.416 preparing model and tools run_step=1
# 21:51:56.417 model request
# 21:51:57.025 handle model response
# 21:51:57.026 running tools=['transfer_to_writer_agent']
# 21:52:01.735 preparing model and tools run_step=2
# 21:52:01.736 model request
# 21:52:03.369 handle model response

print(f"{result.data}, {type(result.data)}")
# output='In whispers of the cosmic dance, \nWhere particles in shadows prance, \nA world unseen, so strange yet bright.', <class '__main__.FinalResponse'>
```
A follow-up variant where the tools return the nested models' data, so the final output comes back fully structured:

```python
class MathModel(BaseModel):
    """The model returned by the math agent."""
    val: int = Field(description="the result of the operation")


class PoemModel(BaseModel):
    """The model returned by the writer agent."""
    text: str = Field(description="the poem generated by the agent")


class FinalResponse(BaseModel):
    """The final answer of the query."""
    output: MathModel | PoemModel = Field(description="the final answer of the query")

# [...] agent, writer_agent, add_agent and deps defined as in the previous snippet


@agent.tool
def transfer_to_writer_agent(ctx: RunContext[Deps], topic: str):
    """Ask writer_agent to write a poem about a given topic."""
    print(f"topic for poem: {topic}")
    result = ctx.deps.agent_writer.run_sync(f"Write a poem about {topic}.").data
    return result


@agent.tool
def transfer_to_math_agent(ctx: RunContext[Deps], a: int, b: int):
    """Ask add_agent to add 2 numbers."""
    print(f"numbers to add: {a}, {b}")
    result = ctx.deps.agent_add.run_sync(f"add {a} and {b}").data
    return result


result = agent.run_sync("what is the sum of 3 and 3 equal to ?", deps=deps)
print(f"{result.data}, {type(result.data)}")
# output=MathModel(val=6), <class '__main__.FinalResponse'>
print(f"{result.data.output.val}, {type(result.data.output.val)}")
# 6, <class 'int'>

result = agent.run_sync("write a poem about quantum physics in 1 verse of 3 lines", deps=deps)
print(f"{result.data}, {type(result.data)}")
# output=PoemModel(text='In realms where particles dance and play,\nWhere the fabric of time stretches thin,\nWhispers of secrets in particles sway.'), <class '__main__.FinalResponse'>
print(f"{result.data.output.text}, {type(result.data.output.text)}")
# In the realm where particles play,
# Dancing shadows in a quantum ballet,
# Tiny whispers of uncertainty reign., <class 'str'>
```
@jonathanbouchet thank you for this example, but unless I'm missing something, it's not the solution to OP's problem, and neither is @samuelcolvin's suggestion. You might get away with it in a simple example like this, but I don't think it will work in realistic examples that need to scale.

In both examples we're basically just using another agent to run a single prediction step. That's not a handover like in true multi-agent systems. Try implementing the still very simple Triage Agent example from Swarm; I don't think it will work without ugly workarounds.

Think of it in terms of the "telephone" analogy: OP is asking how one agent on the other end of the line can put them through to another agent. What you're suggesting is that the original agent calls the next one itself and simply passes on the answer. That's why Swarm introduces the notion of a handoff.

Happy to be wrong though!
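To make the distinction concrete, here's a rough sketch of what a Swarm-style handoff could look like with pydantic-ai as it stands: the triage agent only decides *who* should answer, and the caller then hands the request to that agent. This is not an official pydantic-ai pattern; the agent names, prompts and the `HandoffTo`/`run_with_handoff` helpers are illustrative only.

```python
from pydantic import BaseModel
from pydantic_ai import Agent


class HandoffTo(BaseModel):
    """Marker result: the triage agent names the specialist that should take over."""
    agent_name: str  # e.g. 'sales' or 'refunds'


# illustrative specialists and model name
sales_agent = Agent('openai:gpt-4o', system_prompt='Handle sales questions.')
refunds_agent = Agent('openai:gpt-4o', system_prompt='Handle refund requests.')

triage_agent = Agent(
    'openai:gpt-4o',
    system_prompt="Decide whether 'sales' or 'refunds' should handle the request.",
    result_type=HandoffTo,
)

specialists = {'sales': sales_agent, 'refunds': refunds_agent}


def run_with_handoff(user_prompt: str) -> str:
    # 1. the triage agent only decides who should answer...
    decision = triage_agent.run_sync(user_prompt).data
    # 2. ...then control passes to that agent, which produces the user-facing answer itself
    target = specialists[decision.agent_name]
    return target.run_sync(user_prompt).data


print(run_with_handoff('I was charged twice, can I get my money back?'))
```

The key difference from the tool-call approach above is that the specialist's answer goes straight to the user instead of being passed back through the triage agent, which is roughly what a Swarm handoff does.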
Hey! I'm a big fan of the framework so far. Is there a plan for agent handoffs? For example, in the demo library OpenAI released, swarm:
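(For context, the handoff example in the Swarm README looks roughly like this; it is reconstructed here from memory of that README rather than taken from the original post, so treat the exact names as approximate:)

```python
from swarm import Swarm, Agent

client = Swarm()

agent_b = Agent(
    name="Agent B",
    instructions="Only speak in Haikus.",
)

def transfer_to_agent_b():
    """Hand the conversation over to Agent B."""
    return agent_b

agent_a = Agent(
    name="Agent A",
    instructions="You are a helpful agent.",
    functions=[transfer_to_agent_b],
)

response = client.run(
    agent=agent_a,
    messages=[{"role": "user", "content": "I want to talk to agent B."}],
)
print(response.messages[-1]["content"])
```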
Is there some intended pattern for this? I think this could work, though I've yet to try it.

I'm excited to hear your plans for multi-agent functionality; it helps when you've got too many tools to cram into a single agent.

P.S. If I missed a page in the documentation covering this, I apologize!