Add Metadata to LLM Calls in Logfire #740

Open
PierreColombo opened this issue Dec 28, 2024 · 3 comments

Comments

@PierreColombo

Description

Implement a feature in Logfire that allows users to attach custom metadata to each LLM API call. This enhancement would enable more detailed tracking, filtering, and analysis of LLM interactions within the Logfire observability platform, and a better understanding of token consumption.

@antoni0z

antoni0z commented Jan 2, 2025

Using spans allows you to do this:

    with logfire.span("ai_flow_1", prompt_id=5):  # add as many fields as you need
        llm.call()  # pseudocode for whatever LLM you use

LLMs in Logfire are currently traced per provider (OpenAI, Anthropic). I agree it would be great if they went deeper into LLM-specific filtering beyond what spans offer, but it's not clear how to do that without disrupting the OpenAI and Anthropic APIs.

It would be great if this could be done generally for all LLM-type traces.
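To make the span pattern concrete, here is a minimal, self-contained sketch. `record_span` and `fake_llm_call` are hypothetical stand-ins (so the snippet runs without Logfire installed); with Logfire itself you would use `logfire.span` and a real client. The attribute names are illustrative:

```python
from contextlib import contextmanager

# Collected span records; in Logfire these would be exported to the platform.
SPANS = []

@contextmanager
def record_span(name, **attributes):
    """Hypothetical stand-in for logfire.span: attaches custom metadata to a span."""
    span = {"name": name, "attributes": dict(attributes)}
    SPANS.append(span)
    yield span

def fake_llm_call(prompt):
    # Stand-in for whatever LLM you use; returns text plus token usage.
    return {"text": "hello", "total_tokens": 42}

with record_span("ai_flow_1", prompt_id=5, user_id="u-123") as span:
    result = fake_llm_call("Say hello")
    # Metadata can also be attached after the call, e.g. token consumption:
    span["attributes"]["total_tokens"] = result["total_tokens"]

print(SPANS[0]["attributes"])
# {'prompt_id': 5, 'user_id': 'u-123', 'total_tokens': 42}
```

The point is that arbitrary keyword attributes on the span become queryable metadata for the nested LLM call.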

@alexmojaki
Contributor

it would be great if they went deeper into LLM-specific filtering beyond what spans offer

@antoni0z can you please clarify this?

@antoni0z

antoni0z commented Jan 2, 2025

it would be great if they went deeper into LLM-specific filtering beyond what spans offer

@antoni0z can you please clarify this?

Yes, I think having special attributes for LLMs is needed.

Right now this is done specifically for OpenAI traces and Anthropic traces.

So if I want to build a dashboard for tokens consumed, it's difficult to do in a general way across LLMs, because I have to filter the Anthropic way and the OpenAI way.

If each LLM call had some standard attributes, like tokens spent, function calls used, and other LLM-specific attributes, this would help with better general LLM filtering.

It would also help libraries like LangChain integrate better with Logfire, if they follow these tracing abstractions.

It's just an idea, so take it with a grain of salt. What do you think? Does Logfire have something like this in mind to become even more helpful in the LLM-specific context?
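As a sketch of what standardized attributes would enable: if every LLM span carried the same token field regardless of provider, a cross-provider dashboard reduces to a single aggregation. The attribute names below are made up for illustration (OpenTelemetry's GenAI semantic conventions define comparable ones):

```python
from collections import defaultdict

# Example span attributes as they might look with provider-agnostic fields.
spans = [
    {"llm.provider": "openai", "llm.model": "gpt-4o", "llm.total_tokens": 120},
    {"llm.provider": "anthropic", "llm.model": "claude-3-5-sonnet", "llm.total_tokens": 95},
    {"llm.provider": "openai", "llm.model": "gpt-4o", "llm.total_tokens": 30},
]

def tokens_by_provider(spans):
    """Sum token usage per provider using one shared attribute name."""
    totals = defaultdict(int)
    for s in spans:
        totals[s["llm.provider"]] += s["llm.total_tokens"]
    return dict(totals)

print(tokens_by_provider(spans))
# {'openai': 150, 'anthropic': 95}
```

Without a shared attribute, the same query needs one filter per provider's trace shape, which is the pain point described above.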

4 participants
@alexmojaki @PierreColombo @antoni0z and others