
feat: View agent's static context #239

Closed
1 task done
joshua-mo-143 opened this issue Jan 24, 2025 · 1 comment
Comments

@joshua-mo-143
Contributor

  • I have looked for existing issues (including closed) about this

Feature Request

ref Discord conversation https://discord.com/channels/896944341598208070/1330217496988352656

Users should be able to see exactly what gets sent to the API as the prompt.

Motivation

Better observability for LLM calls

Proposal

I propose adding a new method to the AgentBuilder (or the Agent struct) that exposes the underlying prompt, so it can be inspected before a request is made.
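As a rough illustration only (this is not the actual rig API; the struct fields and the `rendered_prompt` method name are hypothetical), such an inspection method might assemble the preamble, static context, and user input into the final prompt string without making an API call:

```rust
// Hypothetical sketch of the proposed feature, not rig's real types.
struct Agent {
    preamble: String,            // system prompt
    static_context: Vec<String>, // documents always included in the prompt
}

impl Agent {
    /// Hypothetical inspection method: render the full prompt that would
    /// be sent to the model, for observability/debugging purposes.
    fn rendered_prompt(&self, user_input: &str) -> String {
        let mut parts = vec![self.preamble.clone()];
        parts.extend(self.static_context.iter().cloned());
        parts.push(user_input.to_string());
        parts.join("\n---\n")
    }
}

fn main() {
    let agent = Agent {
        preamble: "You are a helpful assistant.".into(),
        static_context: vec!["Doc: company FAQ".into()],
    };
    // Inspect exactly what would be sent, without calling the provider.
    println!("{}", agent.rendered_prompt("What are your hours?"));
}
```

The key design point is that the method is pure: it only renders the prompt locally, so it can be logged or asserted on in tests without incurring an LLM call.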

Alternatives

Not really sure what the alternatives are here.

@joshua-mo-143
Contributor Author

Closing because there's probably a better way to do this as previously discussed in eng sync.
