list ippools with label selector #10267
base: master
Conversation
Signed-off-by: GitHub <[email protected]>
b4d33f9 to 617d27a
@gojoy thanks for the PR! I just want to make sure I understand this correctly - the idea is that instead of specifying exact IP pool names in the CNI config, you'd be able to specify a label selector that selects IP pools to use for allocations on that node? I'm curious if you've actually had problems with the current approach, and if so what level of scale you've seen them at and what the problems were? One thing I'm a bit wary of is the interaction between this and the
There are thousands of IPPool resources in our production Kubernetes cluster. During each IP allocation by the CNI plugin, all IPPool resources have to be retrieved from the API server, which puts pressure on the cluster's control plane. It also makes IP allocation slow (there are multiple paginated list requests with a limit of 500). We propose using label selectors to filter the IP pools in advance on the API server side, thereby reducing the overhead of these list operations.
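To illustrate the difference, here is a minimal sketch of the server-side filtering being described, assuming the Kubernetes datastore where IPPools are backed by the crd.projectcalico.org/v1 ippools CRD. The generic dynamic client is used for brevity, and the zone=rack-42 selector is a made-up example, not a field or label from this PR.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config; outside a pod you would build this from a kubeconfig instead.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The CRD that backs Calico IPPool resources when using the Kubernetes datastore.
	gvr := schema.GroupVersionResource{
		Group:    "crd.projectcalico.org",
		Version:  "v1",
		Resource: "ippools",
	}

	// With a label selector in the list options, the API server returns only the
	// matching pools, rather than every IPPool in the cluster being paged back
	// 500 at a time. "zone=rack-42" is a hypothetical label for this sketch.
	pools, err := dyn.Resource(gvr).List(context.TODO(), metav1.ListOptions{
		LabelSelector: "zone=rack-42",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pools.Items {
		fmt.Println(p.GetName())
	}
}
```

With the selector pushed down in the list options, the number of items returned no longer grows with the total number of IPPools in the cluster, only with the number of pools that match.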
@gojoy got it, and to make sure I understand the UX you're proposing:
Is that right?
Yes, that's right. This is the approach we came up with to optimize the performance.
@gojoy I'm curious, what is your reason for having thousands of pools? Is your cluster huge? Is it just fragmentation (you're using a lot of small pools to get many IPs) or something else?
Our cluster uses BGP + underlay container networking. Therefore, each node has at least one IPPool resource (usually a subnet with a /26 mask). When there are thousands of nodes in the cluster, the number of IPPool resources becomes extremely large as well.
@gojoy normally for that I'd expect a single IP pool - Calico already splits IP pools into /26 blocks that it assigns to nodes by default, so you shouldn't need to use many IP pools unless you want precise control of which IP subnets are assigned to which nodes or you have a large discontinuous IP space for pods in the cluster.
Ideally, a single IP pool in the cluster would suffice. However, in our scenario, because we run multiple clusters of different sizes and the cluster sizes change dynamically, we have to use multiple IPPool resources in order to manage IP resources more precisely.
Description
List IPPools with a label selector when assigning IPs in IPAM. When there are a large number of IPPools in the Kubernetes cluster, this optimization can lead to a performance improvement.
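As a rough illustration of the selection step itself (not this PR's actual code), a selector string such as one supplied through the IPAM configuration could be parsed and matched against each pool's labels using the standard apimachinery labels package; the selector and pool labels below are invented for the example.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// Hypothetical selector, e.g. as it might be supplied via the IPAM config.
	sel, err := labels.Parse("environment=prod,zone in (rack-42,rack-43)")
	if err != nil {
		panic(err)
	}

	// Labels of two example IPPool resources (made up for this sketch).
	pools := map[string]map[string]string{
		"pool-rack-42": {"environment": "prod", "zone": "rack-42"},
		"pool-rack-99": {"environment": "prod", "zone": "rack-99"},
	}

	// Only pools whose labels satisfy the selector are considered for allocation.
	for name, lbls := range pools {
		if sel.Matches(labels.Set(lbls)) {
			fmt.Println("candidate pool:", name)
		}
	}
}
```

The same parsed selector can also be serialized into the list options sent to the API server, so that the filtering happens server-side rather than in the plugin.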
Related issues/PRs
Todos
Release Note
Reminder for the reviewer
Make sure that this PR has the correct labels and milestone set.
Every PR needs one docs-* label.
- docs-pr-required: This change requires a change to the documentation that has not been completed yet.
- docs-completed: This change has all necessary documentation completed.
- docs-not-required: This change has no user-facing impact and requires no docs.

Every PR needs one release-note-* label.
- release-note-required: This PR has user-facing changes. Most PRs should have this label.
- release-note-not-required: This PR has no user-facing changes.

Other optional labels:
- cherry-pick-candidate: This PR should be cherry-picked to an earlier release. For bug fixes only.
- needs-operator-pr: This PR is related to install and requires a corresponding change to the operator.