Support IPv6 loadbalancer services #179
This is a reasonable request. There are a few parts to this, though; it is a bit of a lift.
I think @detiber or @displague is most likely to know?
Edit: kubernetes/enhancements#1992 tracks the addition of
I assume for now people will need to define one service with
The service spec (https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#servicespec-v1-core) has a few fields that jump out:
More details and examples at https://kubernetes.io/docs/concepts/services-networking/dual-stack/. In older versions:
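For reference, the dual-stack fields in the linked docs are `ipFamilies` and `ipFamilyPolicy` (stable in the v1.20+ Service API). A minimal sketch of a dual-stack LoadBalancer Service, assuming a cluster with dual-stack networking enabled (name and selector are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-dualstack-svc   # hypothetical name
spec:
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack   # or RequireDualStack to fail if IPv6 is unavailable
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: my-app   # hypothetical selector
  ports:
    - port: 80
      targetPort: 8080
```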
@flokli I just spent time going through that enhancement proposal. As far as I can tell, it looks stalled. I think that the simplest first step here is to support either IPv6 or IPv4 in the CCM: let it look at the requested family and get BGP for that specific family. In the future, we can look at single-Service dual-stack support. Is there an easier way?
Yes, the KEP seems to be stalled. Thanks for your follow-up question there!
Yeah, that's what I meant. Essentially, take a look at the
Yeah, once there's one or another way to "annotate" multiple load balancer IPs (be it annotations or fields). For the time being, people looking to expose something dual-stacked can just deploy two services, one for each address family (a disappointing hack, but a workaround until there's a better way).
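The two-service workaround can be sketched like this, pinning each Service to a single address family via `ipFamilyPolicy: SingleStack` and an explicit `ipFamilies` entry (service names and the `app: my-app` selector are hypothetical, for illustration only):

```yaml
# Two LoadBalancer Services for the same backend, one per address family.
apiVersion: v1
kind: Service
metadata:
  name: my-app-v4   # hypothetical name
spec:
  type: LoadBalancer
  ipFamilyPolicy: SingleStack
  ipFamilies: ["IPv4"]
  selector:
    app: my-app     # hypothetical selector
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-v6   # hypothetical name
spec:
  type: LoadBalancer
  ipFamilyPolicy: SingleStack
  ipFamilies: ["IPv6"]
  selector:
    app: my-app     # hypothetical selector
  ports:
    - port: 80
      targetPort: 8080
```

Each Service then gets a load balancer IP in its own family, at the cost of two external IPs and two Service objects to keep in sync.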
I found the following links helpful for future consideration of dual-stack support: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#configure-ipv4-ipv6-dual-stack and https://github.com/kubernetes/cloud-provider-gcp/pull/268/files
Hey @displague, hope you're well. Has there been any revised discussion around IPv6 support here? The Sidero Labs team is working on a new dual-stack cluster, but we're unable to create dual-stack services with nginx-ingress.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/lifecycle rotten
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/remove-lifecycle rotten
/reopen
@cprivitere: Reopened this issue.
/triage accepted
Right now, this only sets up IPv4 BGP in the API: https://github.com/equinix/cloud-provider-equinix-metal/blob/master/metal/bgp.go#L218
It should set up both IPv4 and IPv6 peers (this behaviour could possibly be made configurable).
MetalLB seems to support IPv6 sufficiently (both multiprotocol BGP and IPv6 LoadBalancer services).
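The per-family selection suggested above can be sketched in Go. This is an illustrative stand-in, not the actual cloud-provider-equinix-metal code (the real logic lives in metal/bgp.go); `ipFamiliesToPeer` is a hypothetical helper operating on the string values of a Service's `spec.ipFamilies`:

```go
package main

import "fmt"

// ipFamiliesToPeer decides which BGP address families the CCM would need
// to establish peers for, based on a Service's spec.ipFamilies values.
// Hypothetical helper for illustration only.
func ipFamiliesToPeer(ipFamilies []string) (v4, v6 bool) {
	if len(ipFamilies) == 0 {
		// Older clusters may leave ipFamilies unset; default to IPv4,
		// matching the current IPv4-only behaviour.
		return true, false
	}
	for _, f := range ipFamilies {
		switch f {
		case "IPv4":
			v4 = true
		case "IPv6":
			v6 = true
		}
	}
	return v4, v6
}

func main() {
	v4, v6 := ipFamiliesToPeer([]string{"IPv6"})
	fmt.Printf("peer IPv4: %v, peer IPv6: %v\n", v4, v6)
	// prints: peer IPv4: false, peer IPv6: true
}
```

A dual-stack Service (`["IPv4", "IPv6"]`) would then request sessions for both families, while single-stack Services keep today's one-family behaviour.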