For performance reasons it is desirable to keep the number of requests to the connected Prometheus instance small.
To achieve that, it would be nice to have some kind of in-memory caching layer between the language server and parts of the Prometheus v1 API.
@gotjosh mentioned that reusing the internal data structures of Prometheus implemented here would help reduce the memory footprint of such a cache.
Before working on this, some testing should be done to figure out how much load the language server actually puts on the Prometheus server.