CPU limits and requests totals not always matching sum of containers requests/limits #53
Hey @chilanti, thanks for asking about this! I'm guessing this is a bug; do you happen to know which version of kube-capacity you're using?
@robscott - thanks for getting back - I guess it's 0.5.0:
There are some more recent bug fixes in the latest release (0.6.1); I'm hoping they'll fix your issue, but let me know if not.
Hi Rob - I just upgraded to 0.6.1, but that particular issue is still there. I still see that the pod limits are set to 2, while the only active container has limits=400m.
Hi @robscott
Thanks for the detailed bug report @chilanti, and for the reminder @kmlefebv! I misunderstood this the first time, but after digging a bit further, it looks like the requests and limits are computed with a k8s util function: https://github.com/robscott/kube-capacity/blob/master/pkg/capacity/resources.go#L130. That function returns the max of the init and active container requests/limits, as you'd suggested above: https://github.com/kubernetes/kubernetes/blob/33de444861d3de783a6618be9d10fa84da1c11b4/pkg/api/v1/resource/helpers.go#L56. As far as actually printing out results goes, I'm completely ignoring init containers; that could and probably should be fixed, but I'm not sure how to properly differentiate init containers without further complicating the output.
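To make the behavior above concrete, here is a minimal Go sketch of the rule that Kubernetes helper applies per resource: the pod-level total is the greater of the sum over regular containers and the maximum over init containers (since init containers run sequentially, before the app containers). The function name and millicore representation here are illustrative, not the actual k8s code.

```go
package main

import "fmt"

// podEffectiveCPULimit illustrates (in simplified form) how the k8s
// helper derives a pod-level value: take the sum of the regular
// containers' limits, the max of the init containers' limits, and
// return whichever is greater. Values are CPU millicores.
func podEffectiveCPULimit(containers, initContainers []int64) int64 {
	var sum int64
	for _, c := range containers {
		sum += c
	}
	var maxInit int64
	for _, ic := range initContainers {
		if ic > maxInit {
			maxInit = ic
		}
	}
	if maxInit > sum {
		return maxInit
	}
	return sum
}

func main() {
	// The scenario in this issue: one active container with limits=400m
	// and an init container with limits=2 (i.e. 2000m). The pod-level
	// total comes out as 2000m, i.e. the "2" reported by the tool.
	fmt.Println(podEffectiveCPULimit([]int64{400}, []int64{2000}))
}
```

This explains why the pod total is neither the sum of all containers nor simply the first value found: the init container's limit dominates whenever it exceeds the sum of the active containers' limits.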
@robscott - thanks for getting back to us. I've opened a request for enhancement against kubectl - I think that if we had a function that returned just the totals for the running containers, the tool could easily output two sets of summaries:
First of all - great tool, simple to use and powerful.
We noticed that in some cases the totals that the tool rolls up at the pod level do not match the total cpu limits/requests of the actual containers in the pods. This seems to happen when the pod has init containers that specify cpu requests/limits. For example:
In this case, there's only one active container in the pod and its `cpu.limits` are 400m - but the total reported at the pod level says `cpu.limits` is 2. We looked at the pod definition on the actual cluster and saw that it has an init container whose `cpu.limits` are in fact 2:
At this point we are left wondering whether this is expected behavior - and if it is, whether the tool picks up the greater of the two values or just picks the first one for the pod.
Thanks.