Description
Environment
- Calico/VPP version: 3.26.0
- Kubernetes version: 1.28.3
- Deployment type: On-prem VM
- Network configuration: Calico (planning to use SRv6, but haven't gotten there yet)
- Containerd version: 1.7.8
Issue description
I'm setting up an IPv6 cluster. Each node in the cluster has two interfaces within ESXi: an IPv4 interface for OOBM, and a second interface that serves as the main Kubernetes interface and as the uplink interface for VPP. Whenever I run "kubectl create -f calico-vpp.yaml", the node loses its IPv6 address (as the documentation states it will). My understanding of the documentation is that this should be hitless, but anything trying to reach that IP gets no response. As a result, all kubectl commands stop working, since the API server was using that address.
I have used nerdctl to exec into the container, and when I run "ip a" there, the uplink interface I configured shows no IPv6 address, only a link-local one. Surprisingly, the IPv4 interface and its address are still listed in the container, and the node has not lost that IP at all.
Is this a bug or am I doing something wrong?
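For what it's worth, here is the kind of check I've been running to see where the address went. This is a hedged sketch: the namespace, the k8s-app label, and the vpp container name are taken from the default generated calico-vpp manifests, so adjust them if your deployment differs.

```shell
# 1. On the node: VPP takes ownership of the uplink, so the global IPv6
#    address disappears from the Linux interface (only link-local remains).
ip -6 addr show || true

# 2. Inside the VPP container: the uplink should carry the address there.
#    (Pod label and container name are the defaults from the generated
#    calico-vpp manifests; adjust to your deployment.)
if command -v kubectl >/dev/null 2>&1; then
  POD=$(kubectl -n calico-vpp-dataplane get pods \
        -l k8s-app=calico-vpp-node -o jsonpath='{.items[0].metadata.name}')
  kubectl -n calico-vpp-dataplane exec "$POD" -c vpp -- \
    vppctl show int addr
else
  echo "kubectl not available here; run these steps on the cluster node"
fi
```

In my case, step 2 shows no global IPv6 address on the uplink inside VPP either.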
To Reproduce
Steps to reproduce the behavior:
- Init Kubernetes using the kubeadm yaml file below:
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "::"
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: "{{ ipv6_node_ip }}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "{{ ipv6_node_ip }}"
apiServer:
  extraArgs:
    oidc-issuer-url: https://werwerr.me
    oidc-client-id: ASF4Os1wJysH6uWvJV9PvyNiph4y4O84tGCHj1FZEE8
networking:
  serviceSubnet: "{{ ipv6_services_subnet }}/108"
  podSubnet: "{{ ipv6_pod_subnet }}/64"
- kubectl edit node {{ node_name }}
- remove the control-plane taint
- kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml
- create the Calico resources with kubectl using the following yaml file:
---
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    linuxDataplane: VPP
    nodeAddressAutodetectionV4: {}
    nodeAddressAutodetectionV6:
      interface: {{ uplink interface }}
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
- curl -o calico-vpp.yaml https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.26.0/yaml/generated/calico-vpp-nohuge.yaml
- edit calico-vpp.yaml to reflect the proper IPv6 services subnet and the proper uplink interface, then apply it via kubectl create
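The two edits in the last step can be sketched on just the relevant ConfigMap fragment. The key names (service_prefix, CALICOVPP_INTERFACES) follow the v3.26 generated manifest; the fd00: subnet and eth0/eth1 interface names below are placeholder values, not my actual ones.

```shell
# Local copy of the relevant fragment of the generated manifest's ConfigMap
cat > calico-vpp-fragment.yaml <<'EOF'
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-vpp-config
  namespace: calico-vpp-dataplane
data:
  service_prefix: 10.96.0.0/12
  CALICOVPP_INTERFACES: |-
    {
      "uplinkInterfaces": [ { "interfaceName": "eth0" } ]
    }
EOF

# Edit 1: IPv6 services subnet (placeholder value) instead of the IPv4 default
sed -i 's|service_prefix: .*|service_prefix: fd00:10:96::/108|' calico-vpp-fragment.yaml
# Edit 2: uplink interface name (eth1 here is a placeholder)
sed -i 's|"interfaceName": "[^"]*"|"interfaceName": "eth1"|' calico-vpp-fragment.yaml
```

The same two substitutions are made in the full calico-vpp.yaml before `kubectl create -f` is run.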
Expected behavior
The calico-vpp pod would be created successfully, and I would keep IPv6 connectivity to the node.
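The connectivity check I use after applying the manifest is just a curl against the API endpoint; the fd00::10 address below is a placeholder for the control-plane node's real IPv6 address.

```shell
# Probe the kube-apiserver over IPv6 (placeholder address; -g keeps curl
# from globbing the bracketed IPv6 literal)
out=$(curl -gsk --max-time 5 "https://[fd00::10]:6443/version" \
      || echo "no response over IPv6")
echo "$out"
```

In my case this prints "no response over IPv6" as soon as the VPP dataplane comes up.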