If we examine a pod in the current /General/Perf/Container Utilization/% mem used over limit chart, we can see the usage as follows:
However, if we open the /General/Kubernetes/Compute Resources/Pod graph, the usage shown is much lower:
In /General/Perf/Container Utilization/Mem, the metric used is:
container_memory_usage_bytes
In /General/Kubernetes/Compute Resources/Pod, the metric used is:
container_memory_working_set_bytes
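For context, a rough sketch of the kind of PromQL each panel is likely running (the exact dashboard queries may differ; the namespace and pod label values below are only placeholders):

```
# Roughly what Perf/Container Utilization/Mem surfaces (includes evictable page cache)
sum by (pod) (container_memory_usage_bytes{namespace="compute", pod=~"sas-compute-server-.*", container!=""})

# Roughly what Kubernetes/Compute Resources/Pod surfaces (what the OOM killer watches)
sum by (pod) (container_memory_working_set_bytes{namespace="compute", pod=~"sas-compute-server-.*", container!=""})
```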
Overall, we can't use the /General/Kubernetes/Compute Resources/Pod dashboard because we want to compare all compute sessions that run overnight overlaid on top of each other, and it would take too long to go through them one by one.
We would instead prefer the /General/Perf/Container Utilization/Mem chart to use container_memory_working_set_bytes.
According to "A Deep Dive into Kubernetes Metrics — Part 3: Container Resource Metrics" by Bob Cotton (FreshTracks.io): "You might think that memory utilization is easily tracked with container_memory_usage_bytes, however, this metric also includes cached (think filesystem cache) items that can be evicted under memory pressure. The better metric is container_memory_working_set_bytes as this is what the OOM killer is watching for."
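A minimal sketch of what the requested change to the % mem used over limit panel might look like, assuming the limit comes from the kube-state-metrics kube_pod_container_resource_limits series (this is an illustration, not the dashboard's actual query):

```
# % of the memory limit actually in the working set, per pod
100 *
  sum by (pod) (container_memory_working_set_bytes{namespace="compute", container!=""})
/
  sum by (pod) (kube_pod_container_resource_limits{namespace="compute", resource="memory"})
```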
Thanks for reporting this @Carus11. We ship a number of Grafana dashboards obtained from various sources. The Kubernetes/Compute Resources/Pod dashboard is one pulled from the Grafana community. The Perf/Container Utilization/Mem dashboard (and the other Perf/* dashboards) were developed by an internal testing team here at SAS focused on performance. As you have discovered, different dashboards surface different metrics, and depending on what you are trying to do or understand, some dashboards will be better for some use cases.
Unfortunately, due to resource constraints, we haven't been able to research and document where each dashboard is most useful, or to make all of the improvements to them we would like. However, we welcome feedback from people like yourself who use the dashboards in real-world situations, on which dashboards and metrics are most useful and where there might be opportunities for improvement. That will help us prioritize the changes we do make.