Support both tool call and message text handling in response #675
Comments
This should have been fixed by #468; if not, please let us know what's missing.
@samuelcolvin this issue is referring to behaviour of
Oh, sorry, I was going too quickly. We support a mixture of both messages and tool calls, but it's not clear what we should do with the messages if there are tool calls. What would you like to happen?
I think that in the case of a tool call ending a run (#142) the behaviour is quite straightforward: just return the text message as the result of the run (assuming the tool is side-effect based and doesn't return something useful itself). What to do with text content of tool calls mid-run is less clear, but I think it would be useful to at least have a mechanism to store them and make them accessible, so the developer can decide what to do with them.
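The "store them and make them accessible" idea could look something like the sketch below. This is not pydantic-ai's actual API — the part classes and the `RunRecord` container are hypothetical names invented here to illustrate keeping mid-run text around for the developer instead of discarding it:

```python
from dataclasses import dataclass, field

# Hypothetical part types standing in for a model response's components.
@dataclass
class ToolCallPart:
    tool_name: str
    args: dict

@dataclass
class TextPart:
    content: str

@dataclass
class RunRecord:
    # Text the model emitted alongside mid-run tool calls, kept for inspection.
    interstitial_text: list = field(default_factory=list)

def handle_model_response(parts, record):
    """Collect tool calls to execute; stash any accompanying text for later."""
    tool_calls = [p for p in parts if isinstance(p, ToolCallPart)]
    texts = [p.content for p in parts if isinstance(p, TextPart)]
    if tool_calls and texts:
        record.interstitial_text.extend(texts)
    return tool_calls
```

The point of the design is that the run loop stays unchanged (tool calls are still executed), but the text is no longer lost — the developer can read `record.interstitial_text` after the run and decide what to do with it.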
This issue is stale, and will be closed in 3 days if no reply is received.
Not stale
So, currently, if you allow text as a final response and get a response with both tool calls and text, we don't treat the text as the final output (we assume the results of the tool calls should be provided back to the model before it produces a final response). While this would be easy to change, I think in many important cases the current behavior is desirable. In particular, I believe @sydney-runkle has run into cases recently (I believe while using anthropic?) where, when making tool calls, the model would generally include some text in the response describing the tool calls being made.

That said, I totally understand why you might want to treat the text part of the response as the final result even when tool calls are present, so I think the main question here is: how should we make it possible to control this behavior? I think it might be reasonable to add a setting for this to
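A setting like the one suggested above might be sketched as follows. The enum name, its members, and `resolve_response` are all hypothetical — this just shows the two behaviors being discussed (text as final output vs. tool results fed back to the model) behind a single switch:

```python
from enum import Enum

class TextWithToolCalls(Enum):
    # Hypothetical setting; names are illustrative, not pydantic-ai's API.
    CONTINUE_RUN = 'continue'  # current behavior: run tools, feed results back
    END_RUN = 'end'            # treat accompanying text as the final output

def resolve_response(text_parts, tool_call_parts, mode):
    """Decide what to do with a response that may mix text and tool calls."""
    if text_parts and tool_call_parts:
        if mode is TextWithToolCalls.END_RUN:
            return ('final', ' '.join(text_parts))
        return ('run_tools', tool_call_parts)
    if text_parts:
        return ('final', ' '.join(text_parts))
    return ('run_tools', tool_call_parts)
```

With `CONTINUE_RUN` as the default, existing behavior is preserved, and users who want the Anthropic-style "text describing the tool calls" treated as final output can opt in with `END_RUN`.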
Just to clarify, if tool run-ending like #142 was implemented, then it would still be possible to inspect the message history of the run and find the text part of the

So the aim would be to potentially make that content more readily accessible in this scenario? Like maybe assigning it to the

What would
Side note, this looks similar to #149
Could this, #142 and #677 be addressed by breaking up the

Is that what's happening here: #725? I guess it's a trade-off of simplicity for common use cases vs flexibility for edge cases.
There is a big difference between how non-streaming and streaming currently handle this. What you described is the behavior for non-streaming responses -- tool calls in the presence of text are still executed. For streaming responses, when not using a result tool, it is the opposite -- any text causes an

I would suggest the client be able to supply a callback to determine when a response is the end based on
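The callback idea above could be sketched roughly like this. Everything here is hypothetical (part dicts with a `kind` key, the `default_end_check` name) — it only illustrates letting the client, rather than the framework, decide when a mixed response ends the run:

```python
# Hypothetical response parts: dicts like {'kind': 'text', 'text': '...'}
# or {'kind': 'tool_call', ...}; the callback decides if the run is done.

def default_end_check(parts):
    # Mirrors current non-streaming behavior: the run ends only
    # when the response contains no tool calls.
    return not any(p.get('kind') == 'tool_call' for p in parts)

def process_response(parts, is_end=default_end_check):
    """Return the final text if the callback says the run is over, else None."""
    if is_end(parts):
        return ''.join(p['text'] for p in parts if p.get('kind') == 'text')
    return None  # caller should execute the tool calls and continue the loop
```

Because the callback sees the whole list of parts, the same hook would cover both the "text ends the run even with tool calls" case and the current default, and it would apply uniformly to streaming and non-streaming paths.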
At the moment it appears that only tool calls OR message text processing is supported, not both at the same time.
I'm not sure if all LLMs support providing both, but it appears some do:
- OpenAI example
- Anthropic documentation appears to indicate it is possible
There are cases where an LLM may perform a tool call as a final response (i.e. it doesn't need to see the results), along with an associated relevant message. Enabling handling of both would avoid an additional request/response cycle to the LLM.
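For concreteness, this is roughly the shape such a response takes — here modeled on an OpenAI chat completion message, where the assistant `content` and `tool_calls` fields can both be populated. The `extract` helper is just an illustration of handling both in one pass, not an existing function:

```python
# A response message carrying both assistant text and a tool call,
# shaped like an OpenAI chat completion message (field names per that API).
message = {
    'role': 'assistant',
    'content': "I've scheduled the meeting for you.",
    'tool_calls': [
        {
            'id': 'call_1',
            'type': 'function',
            'function': {'name': 'create_event', 'arguments': '{"title": "Sync"}'},
        }
    ],
}

def extract(message):
    """Return (text, tool_calls) so both parts of the response can be handled."""
    return message.get('content'), message.get('tool_calls') or []

text, calls = extract(message)
```

If the framework surfaced both parts like this, the tool call could be executed for its side effect while the text is returned as the final message, with no extra round trip to the model.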
I think this is somewhat related to issue #127 and PR #142, but slightly different: I believe those two are about whether a tool call can end a "conversation", which would be a necessary capability to resolve this issue, but resolving this one would additionally involve handling both the tool call and then returning the message text as the "final message" content.