
Integrating queries from the Grafana Explore page #42

Open
vanugrah opened this issue Jan 15, 2025 · 4 comments

Comments

@vanugrah

Hey folks,

First of all, a big thanks to the Perses team for creating this utility. Understanding metrics usage is a critical first step in identifying unused or underused metrics. However, one critical piece that is currently missing is ad-hoc queries from Grafana (or your tool of choice).

Any interest in adding this feature to the project? A "simple" approach would be to parse the query logs generated by Grafana over a given period, extract the queries, and by extension identify the underlying time series.

This is an area we'll be investing time in this quarter, so I'm trying to gauge whether this feature would be of interest to the project. If so, we'd love to collaborate on a solution.
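For illustration, the log-parsing approach described above could be sketched roughly as follows. This is a naive sketch, not part of metrics-usage: it assumes the query log is JSON lines with the query text under an `expr` field (the field name is an assumption; adjust it to whatever your Grafana or datasource query log actually emits), and it uses a crude regex rather than a real PromQL parser.

```python
import json
import re

# Naive extractor: matches an identifier immediately followed by a label
# selector '{...}'. Bare selectors such as a plain `up` are missed, and
# this is NOT a real PromQL parser -- a production version should use an
# actual parser, e.g. the one shipped with Prometheus (Go).
METRIC_RE = re.compile(r"\b([a-zA-Z_:][a-zA-Z0-9_:]*)\s*\{")

def metrics_from_query(query: str) -> set:
    """Best-effort extraction of metric names from one PromQL string."""
    return set(METRIC_RE.findall(query))

def metrics_from_log(lines) -> set:
    """Scan JSON-lines query-log entries and collect used metric names.

    The 'expr' field name is an assumption -- change it to match the key
    your query log actually uses. Malformed lines are skipped.
    """
    used = set()
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        query = entry.get("expr", "")
        if query:
            used |= metrics_from_query(query)
    return used
```

Running this over a window of query logs would yield the set of metric names actually queried in that window, which could then be diffed against the full metric inventory.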

@vanugrah
Author

On an unrelated note: a static analyzer is a nice starting point, but have you also considered an online approach that tracks metrics usage over time? My thought is that if this were turned into a service that constantly polls the right sources and captures a real-time view of metrics usage, it could be used to automatically downsample or even filter unused/underused metrics. A "poor man's adaptive metrics," if you will.

@Nexucis
Member

Nexucis commented Jan 16, 2025

Thank you for your interest in this tool!

  1. Analysing runtime queries is actually more complex than just parsing logs: once we say "ok, let's analyse runtime queries", we can't only look at the ones Grafana is firing. We would also have to support Perses, Thanos, Prometheus, etc.
    @nicolastakashi is already providing such a tool: https://github.com/nicolastakashi/prom-analytics-proxy. Maybe it can help you.

If you are looking for a combination of both, I have heard @nicolastakashi is working on another tool that combines the two, but I have no idea about its status.

  2. The current issue I see with this idea is that if a metric name is removed from the time-series database, metrics_usage won't be able to remove it from its own database, since we don't compare what is coming in against what already exists.
    We cannot do that because metrics_usage works in an asynchronous environment. With multiple Prometheus instances, if one of them doesn't respond (which happens more often than you'd think), we shouldn't remove the metric.

Overall, the problem is "simply" database synchronisation: being able to remove a metric name when necessary. Keeping databases consistent across different network zones is a complex problem.
I'm not saying it's impossible, but I don't think it's currently worth adding complex code to address it, as metrics_usage today is quite simple in its execution and design.

@nicolastakashi
Contributor

Hey folks 👋🏽
Thanks for mentioning me and prom-analytics-proxy, @Nexucis.
Yes, I just made prom-analytics-proxy a metrics-usage backend where you can push metrics usage and correlate it with ad-hoc queries.

I'm missing one last piece of the implementation that I'll finish today, and then I'll release a new version with clear documentation on how to achieve that integration.

I hope it will help you @vanugrah

@nicolastakashi
Contributor

@vanugrah I just released a new version of prom-analytics-proxy with the metrics-usage integration; let me know your feedback.
