Documentation on how to use structured output with AgentChat agents #5043
Thanks @BenConstable9
@ekzhu - I feel like this has been discussed recently - is there another relevant issue?
There was a PR a while back that added the response format option to the model client. It is already available for So a fix can be to add
Update. Submitted a fix #5116
This is not necessary. You just need to set `response_format` on the model client:

```python
import asyncio
from typing import Literal

from pydantic import BaseModel

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        response_format=AgentResponse,  # type: ignore
    )
    agent = AssistantAgent(
        "assistant",
        model_client=model_client,
        system_message="Categorize the input as happy, sad, or neutral following the JSON format.",
    )
    await Console(agent.run_stream(task="I am happy."))


asyncio.run(main())
```
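To consume the structured output downstream, the agent's final message content (a JSON string matching the schema) can be validated back into the Pydantic model. A minimal sketch, assuming the message content arrives as raw JSON; the `raw` string here is an illustrative stand-in, not an actual model response:

```python
from typing import Literal

from pydantic import BaseModel


class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# Hypothetical raw content of the agent's final message.
raw = '{"thoughts": "The user expresses joy.", "response": "happy"}'

# model_validate_json parses and validates in one step, raising on schema violations.
parsed = AgentResponse.model_validate_json(raw)
print(parsed.response)  # happy
```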
So for this issue let's just update the documentation to show how to do this with AssistantAgent.
Thanks, I will try it out.
Does this work with
It should. Did you use the right model version? See: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/structured-outputs?tabs=python-secure
Thank you! For anyone looking for a solution: I used model
What feature would you like to be added?
Support for OpenAI structured output mode in AgentChat: the ability to give an agent a Pydantic class that its response must adhere to.
This could be dumped to JSON as a message for intermediate agents?
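Serializing the structured response to JSON for hand-off between agents could look like the sketch below; `AgentResponse` is an illustrative model, not part of the AutoGen API:

```python
from typing import Literal

from pydantic import BaseModel


class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# An intermediate agent could serialize its structured reply as a JSON message,
# which the next agent (or an external API) can parse and validate.
reply = AgentResponse(thoughts="The user sounds upbeat.", response="happy")
message = reply.model_dump_json()
print(message)
```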
Why is this needed?
This would make integrating AutoGen with API requests easier. For example, consider an answer agent at the end of a complex agentic group chat: it might need to formulate the answer in a specific JSON format once it has taken all the previous messages into account.
OpenAI's structured output mode makes a correct JSON structure far more likely than prompting alone.