Feature Request
ref Discord conversation https://discord.com/channels/896944341598208070/1330217496988352656
Users should be able to see exactly what gets sent to the API (as the prompt).

Motivation
Better observability for LLM calls.

Proposal
Add a new method to the AgentBuilder (or the Agent struct) that exposes the underlying prompt.

Alternatives
No clear alternative comes to mind.
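As a rough illustration of the proposal, here is a minimal sketch of what such an inspection method could look like. The struct fields and the method name `render_prompt` are hypothetical, assuming a builder that accumulates a preamble and context documents; this is not Rig's actual API.

```rust
// Hypothetical sketch of a prompt-inspection method on an agent builder.
// Field names and `render_prompt` are illustrative, not Rig's real API.
pub struct AgentBuilder {
    preamble: Option<String>,
    context: Vec<String>,
}

impl AgentBuilder {
    pub fn new() -> Self {
        Self { preamble: None, context: Vec::new() }
    }

    pub fn preamble(mut self, text: &str) -> Self {
        self.preamble = Some(text.to_string());
        self
    }

    pub fn context(mut self, doc: &str) -> Self {
        self.context.push(doc.to_string());
        self
    }

    /// Render the full prompt that would be sent to the API for a given
    /// user input, without actually making an LLM call.
    pub fn render_prompt(&self, user_input: &str) -> String {
        let mut out = String::new();
        if let Some(p) = &self.preamble {
            out.push_str(p);
            out.push_str("\n\n");
        }
        for doc in &self.context {
            out.push_str(doc);
            out.push_str("\n\n");
        }
        out.push_str(user_input);
        out
    }
}

fn main() {
    let builder = AgentBuilder::new()
        .preamble("You are a helpful assistant.")
        .context("Doc: Rust is a systems programming language.");
    // Inspect exactly what would be sent, before any API call happens.
    println!("{}", builder.render_prompt("What is Rust?"));
}
```

A method like this would give callers observability at the point where the prompt is assembled, rather than requiring them to intercept the HTTP request.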
joshua-mo-143: Closing because there's probably a better way to do this, as previously discussed in eng sync.