Bucket bytes download metric by filetype #5816
Comments
@jlsherrill What's the difference between this and #5815?
Actually, I had in mind another issue. I think these are duplicates. We should close one of them.
This one is for the existing metric related to download bytes (how much data was downloaded); #5815 is for a new metric that measures the number of files downloaded.
We will add an additional attribute to the existing metric. The attribute will describe the type of file served: rpm or metadata. |
Nope. The attribute will describe the file extension. It is plausible that we will extract this information from the header: `pulpcore/content/handler.py`, line 392 in dfa6dfb.
We decided to record the file extension only when the HTTP response code is equal to 3XX or 2XX. |
I have just realized that we do not emit the metric when caching is disabled... 🎱
We have integrated this into our pulp-service plugin. After careful consideration, we plan to remove the metric entirely and replace it with access logs (similar to #5815 (comment)). @dkliban noted that it is possible to pass a header value (which is used by the current
Is your feature request related to a problem? Please describe.
Currently the content app emits download metrics that record the number of bytes downloaded.
For the rpm plugin, we would like to know what is costing more bandwidth: metadata or rpms.
Describe the solution you'd like
One solution would be to bucket the metric by filetype (using the extension of the file), so that we can query just the rpms.
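To illustrate the bucketing idea, here is a plain-Python stand-in for the counter. The class and method names are hypothetical; in the real metric, `filetype` would be an attribute on the existing OpenTelemetry bytes-downloaded counter rather than a dict key.

```python
from collections import defaultdict

class DownloadBytesMetric:
    """Toy stand-in for a bytes-downloaded counter bucketed by filetype."""

    def __init__(self) -> None:
        self._buckets: defaultdict[str, int] = defaultdict(int)

    def add(self, num_bytes: int, filetype: str) -> None:
        # Equivalent to counter.add(num_bytes, {"filetype": filetype})
        # in OpenTelemetry terms.
        self._buckets[filetype] += num_bytes

    def total(self, filetype: str) -> int:
        return self._buckets[filetype]

metric = DownloadBytesMetric()
metric.add(4096, "rpm")
metric.add(512, "xml")   # repodata / metadata traffic
metric.add(8192, "rpm")
```

With the attribute in place, `metric.total("rpm")` answers the bandwidth question directly instead of lumping rpm and metadata bytes together.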
Describe alternatives you've considered
Additional context