cloud_cidr_allocator: don't assume gce cidrs are validated #118043
Conversation
The cloud_cidr_allocator should not assume that the cidrs allocated for the Nodes are validated and will always be single stack or dual-stack. Change-Id: Ieeda768335b2f7ea9c2007e053de46ce2c7f270a
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @wojtek-t @bowei @basantsa1989
@aojea: GitHub didn't allow me to assign the following users: basantsa1989. Note that only kubernetes members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
@aojea: GitHub didn't allow me to request PR reviews from the following users: Argh4k. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: aojea. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@aojea when the provider returns an incorrect list of cidrs (like 2 IPv4 addresses), we do not know which one of them is the valid IPv4 address that is meant for podCIDR. In your changes here, we are assuming it is the first one; is that the expected behavior? I don't see the current behavior as a "bug". It is designed to handle dual stack, and we expect at most 2 cidrs from the providers, one IPv4 and one IPv6.
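The "first one wins" interpretation discussed above can be sketched as follows. This is an illustrative standalone example, not the allocator's actual code; the function name and policy are assumptions for demonstration:

```go
package main

import (
	"fmt"
	"net"
)

// firstCIDRPerFamily keeps only the first IPv4 and the first IPv6 CIDR
// from an unvalidated provider list, a "first wins" policy per IP family.
// Extra CIDRs of an already-seen family are silently dropped.
func firstCIDRPerFamily(cidrs []string) ([]string, error) {
	var out []string
	seenV4, seenV6 := false, false
	for _, c := range cidrs {
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, fmt.Errorf("invalid CIDR %q: %v", c, err)
		}
		isV4 := ipnet.IP.To4() != nil
		switch {
		case isV4 && !seenV4:
			out = append(out, ipnet.String())
			seenV4 = true
		case !isV4 && !seenV6:
			out = append(out, ipnet.String())
			seenV6 = true
		}
	}
	return out, nil
}

func main() {
	// Provider returns two IPv4 CIDRs plus one IPv6: the first IPv4 wins.
	got, err := firstCIDRPerFamily([]string{"10.0.0.0/24", "10.0.1.0/24", "fd00::/64"})
	if err != nil {
		panic(err)
	}
	fmt.Println(got) // [10.0.0.0/24 fd00::/64]
}
```

With this policy a list of two IPv4 CIDRs does not fail: the second is dropped, which is exactly the behavior being questioned in the comment above.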
@aojea - can you respond to the above? I'm not familiar enough with this stuff, and the above sounds like a legitimate concern.
Honestly, I really don't know; I can't see any place that defines the "expected behavior". If the expected behavior is "return only ipv4 and ipv6" then @basantsa1989 is right.
/close
@aojea: Closed this PR.
that is how most of these situations are handled here AFAIK: either we fail in validation, or first wins
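The stricter "fail in validation" alternative could look like the sketch below. The helper name and error messages are hypothetical, assuming the policy the thread describes: at most one IPv4 and one IPv6 CIDR per node:

```go
package main

import (
	"fmt"
	"net"
)

// validateNodeCIDRs rejects a provider CIDR list that is not strictly
// single-stack or dual-stack, i.e. at most one CIDR per IP family.
// Hypothetical helper for illustration; not the allocator's actual code.
func validateNodeCIDRs(cidrs []string) error {
	perFamily := map[bool]int{} // key: true = IPv4, false = IPv6
	for _, c := range cidrs {
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			return fmt.Errorf("invalid CIDR %q: %v", c, err)
		}
		perFamily[ipnet.IP.To4() != nil]++
	}
	if perFamily[true] > 1 || perFamily[false] > 1 {
		return fmt.Errorf("expected at most one CIDR per IP family, got %v", cidrs)
	}
	return nil
}

func main() {
	// A valid dual-stack pair passes; two IPv4 CIDRs are rejected.
	fmt.Println(validateNodeCIDRs([]string{"10.0.0.0/24", "fd00::/64"}))
	fmt.Println(validateNodeCIDRs([]string{"10.0.0.0/24", "10.0.1.0/24"}))
}
```

Failing loudly surfaces the provider misconfiguration to the user instead of silently picking one CIDR, at the cost of leaving the node without a podCIDR until the provider data is fixed.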
/reopen
@basantsa1989 I'm not convinced that this behavior is correct, since there are at least 2 situations where this dual-stack assumption leaves a cluster unusable, and it is confusing for users.
@aojea: Reopened this PR.
I just realized this will move to the cloud-controller-manager, so maybe it is not worth the effort to fix it here.
The cloud_cidr_allocator should not assume that the cidrs allocated for the Nodes are validated and will always be single stack or dual-stack.
/kind bug
/kind cleanup
Fixes: kubernetes/test-infra#29500