
Tigera Operator unable to uninstall cleanly #2031

Closed
prikesh-patel opened this issue Jun 21, 2022 · 38 comments

@prikesh-patel

Expected Behavior

When uninstalling the Tigera Operator helm chart, it should remove all resources and clean up after itself. We should then be able to reinstall the Tigera Operator without having to restart the nodes in an EKS cluster.

Current Behavior

Uninstalling the helm release locally (using helm delete -n tigera-operator tigera-operator) removes the chart successfully. But when reinstalling the same chart, the tigera-operator pod keeps crashing and no calico-node pods get created. The following logs are from the tigera-operator container.

2022/06/20 15:13:44 [INFO] Version: v1.27.5
2022/06/20 15:13:44 [INFO] Go Version: go1.17.9b7
2022/06/20 15:13:44 [INFO] Go OS/Arch: linux/amd64
2022/06/20 15:14:04 [ERROR] Get "https://XXX.gr7.eu-west-1.eks.amazonaws.com:443/api?timeout=32s": dial tcp: lookup 1D36CABDB9FD90B6E9FB961600E4045D.gr7.eu-west-1.eks.amazonaws.com on 10.52.X.X:53: read udp 100.65.X.X:50421->10.52.X.X:53: i/o timeout

This breaks all network connectivity in the cluster, and no new images can be pulled.

When deleting the Tigera Operator helm release through terraform, the resource cannot be deleted and the operation times out. This may be due to the following finalizer on the Installation resource, which installs Calico.

finalizers:
  - tigera.io/operator-cleanup
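
A quick way to confirm that this finalizer is what is blocking deletion (a sketch; the Installation resource is cluster-scoped):

kubectl get installation default -o jsonpath='{.metadata.finalizers}'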

This problem began occurring with Tigera Operator helm chart v2.2.0 (Tigera image v1.27.0 & Calico image v3.23.0).

Possible Solution

A temporary workaround is to restart the nodes. The calico-node pods then begin running on these new nodes, and the tigera-operator pod starts running without any restarts.
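
One way to do that node rotation on EKS, sketched with placeholder names (drain the node, then terminate the instance and let the node group or ASG replace it):

kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
aws ec2 terminate-instances --instance-ids <instance-id>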

Steps to Reproduce (for bugs)

  1. Install the chart using the command helm install tigera-operator stevehipwell/tigera-operator -n tigera-operator --version 2.2.4 --values tigera-values.yaml. The following values can be used:
dnsPolicy: Default
env:
  - name: KUBERNETES_SERVICE_HOST
    value: XXX.gr7.eu-west-1.eks.amazonaws.com
  - name: KUBERNETES_SERVICE_PORT
    value: "443"
hostNetwork: false
installation:
  enabled: true
  spec:
    cni:
      type: AmazonVPC
    componentResources:
      - componentName: Node
        resourceRequirements:
          limits:
            cpu: 1000m
            memory: 256Mi
          requests:
            cpu: 50m
            memory: 256Mi
      - componentName: Typha
        resourceRequirements:
          limits:
            cpu: 1000m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 128Mi
      - componentName: KubeControllers
        resourceRequirements:
          limits:
            cpu: 1000m
            memory: 64Mi
          requests:
            cpu: 100m
            memory: 64Mi
    controlPlaneNodeSelector:
      kubernetes.io/os: linux
      lnrs.io/tier: system
    controlPlaneTolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - key: system
        operator: Exists
    kubernetesProvider: EKS
    nodeMetricsPort: 9091
    nodeUpdateStrategy:
      rollingUpdate:
        maxUnavailable: 25%
      type: RollingUpdate
    registry: quay.io/
    typhaAffinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: lnrs.io/tier
                  operator: In
                  values:
                    - system
    typhaMetricsPort: 9093
    variant: Calico
priorityClassName: ""
rbac:
  create: true
resources:
  limits:
    cpu: 1000m
    memory: 512Mi
  requests:
    cpu: 50m
    memory: 512Mi
serviceAccount:
  create: true
serviceMonitor:
  additionalLabels:
    monitoring-platform: "true"
  enabled: true
tolerations:
  - key: system
    operator: Exists
  2. Delete the chart by running helm delete -n tigera-operator tigera-operator.
  3. Re-install the chart using the same command as in step 1.

Context

We are unable to destroy EKS clusters through terraform, as it times out when uninstalling the Tigera Operator helm release.

Your Environment

  • Operating System and version: Amazon EKS, v1.21.
  • Link to your project (optional):

This is possibly related to projectcalico/calico/issues/6210

@tmjd
Member

tmjd commented Jun 24, 2022

The chart stevehipwell/tigera-operator is not supported by the tigera/operator team, but I expect this would probably be an issue with the official helm chart also.
I'm guessing this is because the operator deployment is being removed, so the finalizer it adds is never removed. First removing the operator's CustomResource (CR) (the Installation "default" resource), allowing the operator to remove the finalizer, and then removing the operator should work.

@caseydavenport do you know the correct way to ensure the operator deployment isn't removed before the Installation CR, since the operator puts a finalizer on the CR? Should the operator also put a finalizer on itself (the operator deployment)? That seems like a bad idea to me; if that were the right way, it would probably also need to put a finalizer on all the resources it uses, to prevent helm from removing them before it could clean up.

Maybe there is a helm chart feature that will delete the Installation CR and ensure it is removed before deleting everything else? (I am doubtful of this, since I don't think helm has much ability to sequence things.)

Maybe we just need to remove the use of the finalizer, which I added in #1710, though judging by what that PR was trying to fix, it seems like the same issue would be hit in this use case too.

@stevehipwell
Contributor

@tmjd my chart will essentially render to the same components as the official chart so I suspect it could be replicated that way too.

Helm handles resource ordering for core resources, but it won't understand CRs. This issue would also be present for plain YAML installations.

If there isn't an official pattern (I've not looked yet), I think the Istio operator uses finalizers and AFAIK that works correctly, so it might be worth looking at how they do it. If neither of these helps then maybe the operator can have a termination wait arg added so it can process any CRs deleted at the same time before terminating.

@caseydavenport
Member

xref: projectcalico/calico#6210

@caseydavenport
Member

If neither of these helps then maybe the operator can have a termination wait arg added so it can process any CRs deleted at the same time before terminating.

This would have been my first thought as well - add a termination grace period, have the operator handle SIGTERM and, rather than exiting immediately, delay exit until it is confident it doesn't need to remove any more finalizers or do any further cleanup. In normal operation that should happen pretty quickly. If we hit the end of the grace period (maybe 60s or so) then we'll get a SIGKILL and be forced to shut down.
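
For illustration only, a minimal sketch of that shutdown behaviour in plain Go (not the operator's actual code; cleanupPending is a hypothetical hook into the controller's view of outstanding finalizers):

package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"
)

// cleanupPending would be wired to the controller's knowledge of finalizers
// that still need removing; hard-coded here purely for illustration.
func cleanupPending() bool { return false }

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)

	<-sigs
	log.Println("SIGTERM received, waiting for pending cleanup")

	deadline := time.After(45 * time.Second) // stay under the pod's grace period
	tick := time.NewTicker(time.Second)
	defer tick.Stop()

	for {
		select {
		case <-deadline:
			log.Println("grace period nearly exhausted, exiting anyway")
			return
		case <-tick.C:
			if !cleanupPending() {
				log.Println("no cleanup pending, exiting")
				return
			}
		}
	}
}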

@stevehipwell
Contributor

@caseydavenport +1! The default pod grace period is 30s, which should be enough time for cleaning up a standard Calico installation, I assume? Say wait 15s after SIGTERM and exit if no deletes are pending. Obviously this can be extended in the chart if longer is needed.

@tmjd
Member

tmjd commented Jun 27, 2022

Handling termination sounds reasonable, but I'm not sure that will work, or at least it wouldn't be sufficient.
First, I think it will be tricky to be confident that any needed cleanup has been done, especially with the eventually consistent nature of K8s. I think it is possible the delete of the operator is triggered before the Installation CR delete was issued, meaning the operator wouldn't know anything was needed (or would need) to be cleaned up.
Second, a grace period won't stop the rest of the operator resources from being cleaned up, meaning any actions it needed to do would possibly be blocked because its ClusterRole or binding may have been removed by that time.

I took a look at Istio and they seem to do the same thing the operator does with respect to termination handling. Looking a bit more at Istio, it doesn't look like the Istio CRs are created with the Istio helm chart, AFAICT (but there are several charts there so I could have missed it). Some of the Istio documentation also suggests that the IstioOperator resource is created separately from the helm chart install. Would a simple solution here be to set installation.enabled in the tigera/operator chart to false and then create the Installation CR separately?
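
For illustration, that decoupling might look roughly like the following (the values key is the one shown in the report above; the Installation spec is abbreviated and would be applied, and deleted, separately from the Helm release):

# Chart values: install only the operator, no Installation CR.
installation:
  enabled: false

# installation.yaml, managed outside the Helm release (applied after install,
# deleted before uninstall).
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  kubernetesProvider: EKS
  cni:
    type: AmazonVPC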

@stevehipwell
Contributor

@tmjd CRDs shouldn't be installed by Helm, especially not by the chart which uses them; this report is likely missing the fact that --skip-crds was used.

RE removing the CR from the chart, this would still require a system which isn't eventually consistent to be guaranteed to work.

What exactly is the purpose of the finalizer? I think the operator could wait/sleep to clear the finalizer as resources it's dependent on wouldn't be removed until it was done.

@caseydavenport
Member

I think it is possible the delete of the operator is triggered before the Installation CR delete was issued, meaning the operator wouldn't know anything was needed (or would need) to be cleaned up

We have control over what the operator thinks, so we might be able to get this right. e.g., check if it's just the pod coming down or if the entire operator deployment has been deleted - if so, wait longer, etc. I think we have options, but agree we'd need to think about the precise behavior (and perhaps add a bit of extra waiting time to normal rolling update behavior, which isn't ideal).

CRDs shouldn't be installed by Helm, especially not the chart which uses them

That doc doesn't seem to say explicitly that you shouldn't use Method 1, but does provide some caveats / tradeoffs.

Second, a grace period won't stop the rest of the operator resources from being cleaned up, meaning any actions it needed to do would possibly be blocked because its ClusterRole or binding may have been removed by that time.

Hm, yeah that could be a problem if helm doesn't order the deletes. Is helm not smart enough to wait for termination of the deployment before deleting the resources that the deployment depends on?

@stevehipwell
Contributor

stevehipwell commented Jun 27, 2022

That doc doesn't seem to say explicitly that you shouldn't use Method 1, but does provide some caveats / tradeoffs.

It doesn't (anymore), but although it works for day 0 (kind of), it doesn't allow for CRD updates, so it is functionally useless for a project like Tigera Operator where the CRDs change.

Hm, yeah that could be a problem if helm doesn't order the deletes. Is helm not smart enough to wait for termination of the deployment before deleting the resources that the deployment depends on?

Helm understands built-in resource types and ordering so should be good on that front. I also think you can't delete a resource which is being used by a pod.

@caseydavenport
Member

Helm understands built-in resource types and ordering so should be good on that front. I also think you can't delete a resource which is being used by a pod.

That's good to know. I am pretty sure that you can delete loosely coupled resources though - like a ClusterRolebinding - that would impact a pod's ability to access the API (at least on most clusters without additional admission checks)

@stevehipwell
Contributor

That's good to know. I am pretty sure that you can delete loosely coupled resources though - like a ClusterRolebinding - that would impact a pod's ability to access the API (at least on most clusters without additional admission checks)

That makes sense; I was thinking more of the ServiceAccount, which is actually attached to the pod. So for a Helm installation it should work due to the kind sorter (I think --wait might be needed too), but for other tools there would need to be a manual ordering/wait constraint if the equivalent is not implemented (Argo & Flux might have this)?

@stevehipwell
Contributor

@caseydavenport has there been any progress on this?

@caseydavenport
Member

@stevehipwell sorry, haven't made any progress on this one yet.

@rgson

rgson commented Aug 9, 2022

After uninstalling the operator with Helm and cleaning up the rest with kubectl, I'm stuck with a calico-node ServiceAccount in the calico-system namespace that refuses to be deleted. How can I delete it?

Edit: Solved. I had to patch away the finalizer.

kubectl patch -n calico-system ServiceAccount/calico-node --type json \
  --patch='[{"op":"remove","path":"/metadata/finalizers"}]'

@muralidharbm1411

I still see a problem with the deletion of the calico-apiserver namespace:

kubectl get ns
NAME STATUS AGE
calico-apiserver Terminating 45d

calico-apiserver pod/calico-apiserver-8d9c6569-9dmw8 0/1 Terminating

I'm unable to delete the above pod. Has anyone faced such an issue?
Version of calico - 3.22.1

Any pointers to a good way of cleaning up calico completely?

@rockc2020

Adding some of my experience uninstalling the calico tigera-operator in our EKS cluster.

  1. Run helm uninstall to uninstall tigera-operator, but only tigera-operator gets uninstalled and the other resources in the calico-system namespace still exist (the same issue as this thread).
  2. Run helm install again to install tigera-operator; then all the resources in the calico-system namespace get cleaned up, and only tigera-operator is left in the cluster.
  3. Run helm uninstall to uninstall tigera-operator again, and then everything is gone.

That's what I did as a workaround for uninstalling calico; I haven't dug into the detailed behaviour of this yet. Not sure if anybody else has done this before.

@ib-ak

ib-ak commented Dec 18, 2022

installations.operator.tigera.io/default blocks uninstallation in my case.

kubectl patch -n calico-system installations.operator.tigera.io/default  --type json \
  --patch='[{"op":"remove","path":"/metadata/finalizers"}]'

@willzhang

willzhang commented Feb 7, 2023

Same problem with automated install and uninstall via Ansible; I cannot automatically uninstall calico with Ansible.

@doughgle

uninstall-install-uninstall worked for me to get a renamed release in the same namespace.

Observations after the second uninstall

CRDs still exist

/home/ssm-user$ kubectl api-resources | grep tig
apiservers                                             operator.tigera.io/v1                  false        APIServer
imagesets                                              operator.tigera.io/v1                  false        ImageSet
installations                                          operator.tigera.io/v1                  false        Installation
tigerastatuses                                         operator.tigera.io/v1                  false        TigeraStatus

Installations are gone

/home/ssm-user$ kubectl get installations.operator.tigera.io -A
No resources found

Calico-apiserver service account remains

/home/ssm-user$ kubectl -n calico-apiserver get all,serviceaccounts 
NAME                              SECRETS   AGE
serviceaccount/calico-apiserver   1         6d20h
serviceaccount/default            1         6d20h

calico-typha service account remains

/home/ssm-user$ kubectl -n calico-system get all,serviceaccounts 
NAME                          SECRETS   AGE
serviceaccount/calico-typha   1         6d21h
serviceaccount/default        1         6d21h

Clusterrole and clusterrolebindings are gone

/home/ssm-user$ kubectl get clusterrole,clusterrolebindings.rbac.authorization.k8s.io | grep cali

/home/ssm-user$ kubectl get clusterrole,clusterrolebindings.rbac.authorization.k8s.io | grep tig

tigera-operator namespace is clear

/home/ssm-user$ kubectl get all,sa -n tigera-operator 
NAME                     SECRETS   AGE
serviceaccount/default   1         6d21h

@willzhang

installations.operator.tigera.io/default blocks uninstallation in my case.

kubectl patch -n calico-system installations.operator.tigera.io/default  --type json \
  --patch='[{"op":"remove","path":"/metadata/finalizers"}]'

Good approach, it deletes all the other resources.

@vijay-veeranki

vijay-veeranki commented Mar 2, 2023

+1

For us, the installations.operator.tigera.io/default finalizers block tigera helm uninstallation, and the ServiceAccount/calico-node finalizers block deletion of the calico-system namespace.

Deleting the installations.operator.tigera.io default resource before destroying the tigera helm release removes the installation, also removes the finalizers on the calico-system ServiceAccount/calico-node, and destroys cleanly.

resource "helm_release" "tigera_calico" {
  name       = "tigera-calico-release"
  chart      = "tigera-operator"
  repository = "https://projectcalico.docs.tigera.io/charts"
  namespace  = "tigera-operator"
  timeout    = 300
  version    = "3.25.0"

  depends_on = [
    kubernetes_namespace.tigera_operator,
    kubernetes_namespace.calico_system,
    kubernetes_namespace.calico_apiserver
  ]
  set {
    name  = "installation.kubernetesProvider"
    value = "EKS"
  }
}

resource "null_resource" "remove_finalizers" {
  depends_on = [helm_release.tigera_calico]

  provisioner "local-exec" {
    when    = destroy
    command = <<-EOT
      kubectl delete installations.operator.tigera.io default
    EOT
  }

  triggers = {
    helm_tigera = helm_release.tigera_calico.status
  }
}

@caseydavenport
Member

Example PR with one approach for resolving this: #2662

@SamuZad

SamuZad commented May 25, 2023

Fun fact: if you run kubectl delete installation default prior to helm uninstall, it deletes the default calico installation, which is the root of the issue 😄

@stevehipwell
Contributor

@SamuZad although you're technically correct (well, assuming enough of a pause between the kubectl delete and the helm uninstall), the point here is that the installation method can't also support uninstallation, which isn't great UX and means it isn't suitable for declarative IaC.
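
For anyone following along, a sketch of that sequence (release name and namespace taken from the report above; by default kubectl delete waits for the finalizer to be removed before returning):

kubectl delete installation default
helm uninstall tigera-operator -n tigera-operator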

@SamuZad

SamuZad commented May 30, 2023

@stevehipwell I don't disagree in any way, shape or (indeed) form!

I was leaving that there so that when people encounter this issue, they can just run kubectl delete installation default, rather than the kubectl patch workarounds listed above 😄

@stevehipwell
Contributor

@SamuZad the kubectl patch workaround is for people who've had the issue, as they no longer have a working operator and their installation is blocked on the finalizer. Your advice is valid for people who haven't encountered this issue yet and are in a position to run arbitrary commands against their cluster.

@SamuZad

SamuZad commented May 30, 2023

@stevehipwell I am not here to argue (and to be honest, I'm not sure why your tone is as argumentative and unwelcoming as it is), but that's actually (and technically) incorrect.

I say this as somebody who encountered the issue and ended up with a halfway-deleted installation. I simply wanted to leave an alternative (and potentially cleaner) solution for others in the future until this is fixed 🙂

Both solutions require for someone to have "encountered the issue and be in a position to run arbitrary commands against their cluster", as you put it.

Since helm uninstall does not uninstall the CRDs, so long as the CRDs are not manually deleted, kubectl delete installation default will get the job done (since the 'installation' is actually defined by a CRD, as I'm sure you know). This deletion will implicitly remove the finalizers from the undeleted resources.

@stevehipwell
Contributor

@stevehipwell I am not here to argue (and to be honest, I'm not sure why your tone is as argumentative and unwelcoming as it is), but that's actually (and technically) incorrect.

@SamuZad I'm unsure how your comment helps support people with this issue or offers a new take on the situation; it looks a lot like a drive-by? It's already well understood that this issue is caused by the installation resource being deleted at the same time as, or after, the operator, so obviously manually uninstalling it first will work, but the scope of the issue is explicitly about Helm uninstall working correctly.

Since helm uninstall does not uninstall the CRD's, so long as the CRDs are not manually deleted, kubectl delete installation default will get the job done (since the 'installation' is actually defined by a CRD, as I'm sure you know). This deletion will implicitly remove the finalizers from the undeleted resources.

I don't think you'll find it will, as that's not how finalizers work (Tigera Operator is the controller in the context of the docs, and it's no longer running). Have you actually tested this?

Once you've uninstalled the Helm chart you're left with an installation with a finalizer but nothing running in the cluster to remove the finalizer, which is where the patch commands above come in; your solution will not work, as kubectl delete installation default is blocked by the finalizer.

@SamuZad

SamuZad commented May 30, 2023

@stevehipwell you're right 🙂 I was originally working in a cluster that has a custom workaround for stuck finalizers; doing so on a vanilla cluster meant I couldn't delete the installation.

I'm happy to delete all my comments in order to keep the issue thread clean 🙂

@yardenw-terasky

installations.operator.tigera.io/default blocks uninstallation in my case.

kubectl patch -n calico-system installations.operator.tigera.io/default  --type json \
  --patch='[{"op":"remove","path":"/metadata/finalizers"}]'

Had the same issue; this helped.

@Rambatino

There is a big risk when removing finalizers that you end up with orphaned resources that exist on the node but aren't visible to k8s. So be warned.

@stevehipwell
Contributor

@Rambatino in this case it's explicitly a stuck resource with no associated controller to manage it. I agree that removing finalizers is generally a bad idea, but in this case, given the cluster is likely in the process of being destroyed, it's the lesser of two evils. That said, if you're not destroying the cluster and are removing the finalizer to delete the installation, you're going to need to either replace all the nodes or fix them manually.

@rdxmb

rdxmb commented Jul 3, 2023

kubectl delete installation default also did not work in my case.

Patching like

kubectl patch -n calico-system installations.operator.tigera.io/default  --type json \
  --patch='[{"op":"remove","path":"/metadata/finalizers"}]'
installation.operator.tigera.io/default patched

does not remove any calico pods.

The tigera-operator logs

{"level":"error","ts":"2023-07-03T08:39:51Z","logger":"controller_apiserver","msg":"Installation not found","Request.Namespace":"","Request.Name":"apiserver","reason":"ResourceNotFound","error":"Installation.operator.tigera.io \"default\" not found","stacktrace":"github.com/tigera/operator/pkg/controller/status.(*statusManager).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/status/status.go:406\ngithub.com/tigera/operator/pkg/controller/apiserver.(*ReconcileAPIServer).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/apiserver/apiserver_controller.go:246\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:122\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:323\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:274\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235"}
{"level":"error","ts":"2023-07-03T08:39:51Z","msg":"Reconciler error","controller":"apiserver-controller","object":{"name":"apiserver"},"namespace":"","name":"apiserver","reconcileID":"43f3a602-84fc-46e7-9836-0da5d1837601","error":"Installation.operator.tigera.io \"default\" not found","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:274\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235"}

Any workaround here?

Edit:

In the end the calico-kube-controllers pod is stuck with 2023-07-03 08:45:08.439 [ERROR][1] client.go 295: Error getting cluster information config ClusterInformation="default" error=connection is unauthorized: Unauthorized

What is the correct order to uninstall the complete manifest? I've deployed via kubectl, without helm.

@caseydavenport
Member

What is the correct order to uninstall the complete manifest? I've deployed via kubectl

You should uninstall custom-resources.yaml first, and then tigera-operator.yaml - just the opposite of install ordering.

If you uninstall the operator first, then it won't be running in the cluster in order to clean up after itself when custom-resources.yaml is deleted.
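
A sketch of that ordering for a manifest-based install (file names as in the install docs referenced above):

kubectl delete -f custom-resources.yaml   # blocks until the operator removes the Installation finalizer
kubectl delete -f tigera-operator.yaml    # only once the cleanup above has finished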

@tmjd tmjd mentioned this issue Aug 17, 2023
@sheh1000

sheh1000 commented Mar 7, 2024

Can someone confirm the correct procedure for uninstalling calico?
It was installed following the official instructions, as Operator, with kubectl create -f.

Uninstall procedure I'm going to perform:

  1. Delete the default installation first, as recommended here:
kubectl delete installation default
  2. Delete resources in this order:
kubectl delete -f custom-resources.yaml
kubectl delete -f tigera-operator.yaml
  3. Reboot nodes

@caseydavenport
Member

The stuck finalizers issue should be resolved in v3.28, which releases in April. This was blocking it: projectcalico/calico#8586

Assuming it looks good in v3.28, we'll look at backports of this to earlier releases as well.

@KhasDenis

Can someone confirm the correct procedure for uninstalling calico? It was installed following the official instructions, as Operator, with kubectl create -f.

Uninstall procedure I'm going to perform:

  1. Delete the default installation first, as recommended here:
kubectl delete installation default
  2. Delete resources in this order:
kubectl delete -f custom-resources.yaml
kubectl delete -f tigera-operator.yaml
  3. Reboot nodes

Is a reboot really needed? And what is meant by that, a node rotation?

@tmjd
Member

tmjd commented Mar 26, 2024

Is a reboot really needed? And what is meant by that, a node rotation?

It would be needed to clean out the rules left behind by calico on each of the nodes; I would expect a reboot of the nodes to take care of that.

If the Calico CNI was being used, then a node reboot on its own wouldn't be sufficient to have something functional again. The CNI config would probably need to be cleared out, and then of course a new CNI plugin installed.
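
For completeness, a rough sketch of that per-node cleanup (the paths are the conventional CNI locations and the Calico file names are assumptions; verify them on your nodes first):

sudo rm -f /etc/cni/net.d/10-calico.conflist /etc/cni/net.d/calico-kubeconfig   # remove the Calico CNI config
sudo reboot   # clears the iptables rules and routes left behind by calico-node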
