The K3s CNI binaries are installed alongside the rest of the bundled userspace, and the managed containerd config is updated on restart to point at the current bin dir under /var/lib/rancher/k3s/data/XXX/bin. This makes it difficult to install custom CNI plugins, as the path used by containerd changes every time k3s is upgraded.
This was an obstacle to our packaging Multus with K3s:
This has been complained about on Users Slack:

The thing is that Cilium installs itself in /var/lib/rancher/k3s/data/[long_id]/bin and, following a k3s upgrade, the CNI breaks because the cluster can no longer find the cilium-cni binary, and I need to restart the Cilium DaemonSet for the cluster to work again. This is why I was looking at changing the CNI binary location. Otherwise, I may need a ClusterPolicy with something like Kyverno to detect a Kubernetes upgrade and then restart the pods accordingly, which isn't ideal.
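Until a stable CNI bin dir exists, one possible stopgap is a small script, run after each upgrade, that re-links custom plugin binaries into whichever data/<hash>/bin directory is currently active. The sketch below is purely illustrative and not something k3s ships: the /opt/cni-extra staging path and the "newest directory wins" heuristic are assumptions, not k3s behavior.

```python
#!/usr/bin/env python3
"""Hypothetical workaround sketch: after a k3s upgrade, re-link custom CNI
plugin binaries (e.g. cilium-cni, multus) into whichever data/<hash>/bin
directory the managed containerd config currently points at.

Assumptions (not part of k3s): custom binaries are staged under
/opt/cni-extra, and the most recently modified data/<hash>/bin dir is the
one in use."""

import glob
import os

STAGING_DIR = "/opt/cni-extra"  # assumed staging dir for custom plugins
DATA_GLOB = "/var/lib/rancher/k3s/data/*/bin"  # per-release hashed bin dirs


def current_bin_dir() -> str:
    """Guess the active bin dir by taking the newest data/<hash>/bin."""
    candidates = [d for d in glob.glob(DATA_GLOB) if os.path.isdir(d)]
    if not candidates:
        raise RuntimeError("no k3s data bin directory found")
    return max(candidates, key=os.path.getmtime)


def relink_plugins() -> None:
    target = current_bin_dir()
    for name in os.listdir(STAGING_DIR):
        src = os.path.join(STAGING_DIR, name)
        dst = os.path.join(target, name)
        # Replace any stale link left over from a previous k3s release.
        if os.path.lexists(dst):
            os.remove(dst)
        os.symlink(src, dst)
        print(f"linked {dst} -> {src}")


if __name__ == "__main__":
    relink_plugins()
```

Something like this could be triggered from a systemd unit ordered after the k3s service, but it remains a workaround for the moving path rather than a fix.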