
WIP: Experimental k8s config (DO NOT MERGE) #356

Closed
wants to merge 2 commits

Conversation

@i-chvets (Contributor) commented Jun 4, 2024

WIP: Experimental K8S config

Description

Configuration options for CPU resources for OWGW.

Summary of changes:

  • Modified code to use an env var to specify the number of CPUs for the generic scheduler; if not set, the system value is used (see the sketch below).
  • Modified code to use a configuration parameter to specify the number of CPUs; if not set, the system value is used.
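
As a rough illustration of the env-var change, here is a minimal C++ sketch, assuming a hypothetical variable name OWGW_NUM_CPUS (the actual name used by the PR is not shown in this thread):

```cpp
#include <cstdlib>
#include <string>
#include <thread>

// Thread count for the generic scheduler: use the (hypothetical)
// OWGW_NUM_CPUS environment variable when set and valid, otherwise
// fall back to the core count the system reports.
static unsigned SchedulerCpuCount() {
    if (const char *env = std::getenv("OWGW_NUM_CPUS")) {
        try {
            unsigned long n = std::stoul(env);
            if (n > 0)
                return static_cast<unsigned>(n);
        } catch (...) {
            // Malformed value: ignore it and use the system value.
        }
    }
    return std::thread::hardware_concurrency();
}
```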

Summary of changes:
- Modified code to use a configuration parameter to specify the number of CPUs. If not set, the system value is used.

Signed-off-by: Ivan Chvets <[email protected]>
Summary of changes:
- Modified code to use an env var to specify the number of CPUs for the generic scheduler. If not set, the system value is used.

Signed-off-by: Ivan Chvets <[email protected]>
@i-chvets marked this pull request as draft June 4, 2024 19:10
@stephb9959 (Contributor) left a comment

I don't think you want to tell the app to use X number of cores. This should be done through k8s, or anything else, during deployment; then let the app decide how to use them. The app already knows how many cores are available. This can lead to a problem where someone changing the number of cores in k8s has no effect in the app. It would only be useful if the container reported the wrong number of cores, and for that to be useful we would also need to specify affinity. I do not think this is a good idea. If you want, we can have a chat on this.

@i-chvets (Contributor, Author) commented Jun 4, 2024

> I don't think you want to tell the app to use X number of cores. This should be done through k8s, or anything else, during deployment; then let the app decide how to use them. The app already knows how many cores are available. This can lead to a problem where someone changing the number of cores in k8s has no effect in the app. It would only be useful if the container reported the wrong number of cores, and for that to be useful we would also need to specify affinity. I do not think this is a good idea. If you want, we can have a chat on this.

Fair enough.
The issue is that the app takes all the cores available on the system, i.e. the node; it does not know what was requested in the K8S deployment. If you are inside a container and look at /proc/cpuinfo, you will see every core of the node the container is running on.
There is another approach, but it would still require similar changes, i.e. reading the number of cores from something other than system info.
If we introduce use of the Downward API (https://kubernetes.io/docs/concepts/workloads/pods/downward-api/), the app can access what is actually being requested in K8S. I will open another experimental PR using that method and we can compare which makes better sense.
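
For reference, a minimal sketch of the manifest side: the Downward API can expose the container's CPU limit as an environment variable via resourceFieldRef, which app code like the sketch earlier in this thread could then read. The variable name OWGW_NUM_CPUS and container name owgw are hypothetical:

```yaml
env:
  - name: OWGW_NUM_CPUS
    valueFrom:
      resourceFieldRef:
        containerName: owgw   # hypothetical container name
        resource: limits.cpu
        divisor: "1"          # express the limit in whole cores
```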

@stephb9959 (Contributor) commented
> I don't think you want to tell the app to use X number of cores. This should be done through k8s, or anything else, during deployment; then let the app decide how to use them. The app already knows how many cores are available. This can lead to a problem where someone changing the number of cores in k8s has no effect in the app. It would only be useful if the container reported the wrong number of cores, and for that to be useful we would also need to specify affinity. I do not think this is a good idea. If you want, we can have a chat on this.

> Fair enough. The issue is that the app takes all the cores available on the system, i.e. the node; it does not know what was requested in the K8S deployment. If you are inside a container and look at /proc/cpuinfo, you will see every core of the node the container is running on. There is another approach, but it would still require similar changes, i.e. reading the number of cores from something other than system info. If we introduce use of the Downward API (https://kubernetes.io/docs/concepts/workloads/pods/downward-api/), the app can access what is actually being requested in K8S. I will open another experimental PR using that method and we can compare which makes better sense.

Is this real? I am shocked. So limiting anything in k8s means nothing by default? That's a new one for me. I did not know that.

@i-chvets (Contributor, Author) commented Jun 4, 2024

> Is this real? I am shocked. So limiting anything in k8s means nothing by default? That's a new one for me. I did not know that.

Well, the container will be throttled, and it will be killed if it goes over its memory limit, for example. But nothing prevents the app from creating more threads than there are cores (which is what is actually happening), and when the app oversubscribes, it gets throttled and all those threads never get a chance to run.
If the K8S request/limit is 4 cores and the deployment is pinned to a node with 4 cores, everything is exactly what we want. However, if there is no pinning and K8S decides to put the pod on a 16-core node, the app in the pod will take all 16.
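
For completeness, a container can also discover its own CPU quota without any manifest plumbing by reading the cgroup limits the kubelet sets. A hedged C++ sketch, assuming cgroup v2 (cgroup v1 exposes the same numbers in cpu.cfs_quota_us and cpu.cfs_period_us):

```cpp
#include <fstream>
#include <string>
#include <thread>

// Effective CPU count inside a container: derive it from the cgroup v2
// CPU quota ("<quota> <period>" in /sys/fs/cgroup/cpu.max) when a quota
// is set, otherwise fall back to the core count the system reports.
static unsigned EffectiveCpuCount() {
    std::ifstream f("/sys/fs/cgroup/cpu.max");
    std::string quota;
    long period = 0;
    if (f >> quota >> period && quota != "max" && period > 0) {
        long q = std::stol(quota);
        // Round up so that, e.g., a 2.5-CPU limit yields 3 threads.
        long cpus = (q + period - 1) / period;
        if (cpus > 0)
            return static_cast<unsigned>(cpus);
    }
    return std::thread::hardware_concurrency();
}
```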

@i-chvets (Contributor, Author) commented
Closing.

@i-chvets closed this Jul 26, 2024