
Evals #915

Open
samuelcolvin opened this issue Feb 12, 2025 · 0 comments · May be fixed by #935
Labels: Feature request

samuelcolvin (Member) commented Feb 12, 2025

(Or Evils as I'm coming to think of them)

We want to build an open-source, sane way to score the performance of LLM calls that is:

  • local-first, so you don't need to use a service
  • flexible enough to work with whatever best practice emerges; ideally usable for any code that is stochastic enough to require scoring beyond passed/failed (that includes calling LLM SDKs directly, or even other agent frameworks)
  • usable both for "offline evals" (unit-test-style checks on performance; see the sketch below) and "online evals" that measure performance in production or an equivalent environment (presumably via an observability platform like Pydantic Logfire)
  • usable with Pydantic Logfire when and where that actually helps
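
To make the "offline evals" bullet concrete, here is a minimal sketch of what a unit-test-style eval could look like. It assumes nothing about the eventual API: `Case`, `keyword_score`, `answer_question`, and `run_offline_eval` are hypothetical names used only for illustration. The point is scoring on a continuous scale and asserting on an aggregate, rather than pass/fail per call.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable


@dataclass
class Case:
    """A single eval case: an input plus whatever is needed to score the output."""
    question: str
    expected_keywords: list[str]


def keyword_score(output: str, case: Case) -> float:
    """Score on a 0..1 scale rather than pass/fail: fraction of expected keywords present."""
    hits = sum(kw.lower() in output.lower() for kw in case.expected_keywords)
    return hits / len(case.expected_keywords)


def answer_question(question: str) -> str:
    """Placeholder for the real LLM-backed function being evaluated."""
    raise NotImplementedError('swap in a real LLM call or agent run here')


def run_offline_eval(target: Callable[[str], str], cases: list[Case]) -> float:
    """Run every case through the target and return the mean score."""
    return mean(keyword_score(target(case.question), case) for case in cases)


# Unit-test-style usage: fail CI only if the aggregate score regresses,
# not on any single stochastic miss.
def test_answer_quality():
    cases = [
        Case('What is the capital of France?', ['Paris']),
        Case('Name two primary colours.', ['red', 'blue']),
    ]
    assert run_offline_eval(answer_question, cases) >= 0.8
```

A real implementation would presumably swap the keyword check for whatever scoring approach best practice settles on, and feed the same scores to an observability platform for the online case.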

I believe @dmontagu has a plan.
