
Uninstalling tigera-operator leaves all resources in calico-system namespace #6210

Closed
aquam8 opened this issue Jun 10, 2022 · 5 comments

aquam8 commented Jun 10, 2022

Installed tigera-operator v3.23.1 through Helm; all was fine. I then wanted to uninstall it.
Ran: helm uninstall -n tigera-operator projectcalico
It deleted the operator, but left all resources (running pods, etc.) in the calico-system namespace.

Expected Behavior

I was expecting all resources deployed by tigera-operator to be deleted.

Resources created by the operator should be deleted. How are we supposed to clean up all resources otherwise?

Current Behavior

All resources in calico-system are left behind, including the DaemonSet calico-node, the Deployment calico-kube-controllers, etc.
I was not expecting the calico-system resources to remain after uninstalling the operator through Helm.

How are we supposed to uninstall calico afterwards?

Steps to Reproduce (for bugs)

  1. kubectl create namespace tigera-operator
  2. helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
  3. helm install projectcalico projectcalico/tigera-operator --version v3.23.1 --namespace tigera-operator -f values.yaml
    This is my values.yaml:
installation:
  enabled: true
  kubernetesProvider: "EKS"
  componentResources:
    - componentName: KubeControllers
      resourceRequirements:
        requests:
          memory: "64Mi"
          cpu: "40m"
        limits:
          memory: "96Mi"
          # no cpu limits
    - componentName: Node
      resourceRequirements:
        requests:
          memory: "64Mi"
          cpu: "40m"
        limits:
          memory: "128Mi"
          # no cpu limits
    - componentName: Typha
      resourceRequirements:
        requests:
          memory: "64Mi"
          cpu: "40m"
        limits:
          memory: "96Mi"
          # no cpu limits
  controlPlaneTolerations:
    - effect: NoSchedule
      operator: Exists
      key: "dedicated"

apiServer:
  enabled: false
  4. helm uninstall -n tigera-operator projectcalico
  5. You should see the remaining resources in the calico-system namespace.
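
For completeness, a quick way to enumerate what is left behind after the uninstall (a rough sketch, assuming the default calico-system namespace and that the operator's Installation CRD survived the chart removal):

  # List the workloads still running in the namespace the operator created.
  kubectl get all -n calico-system
  # Check whether the operator's Installation custom resource still exists after the uninstall.
  kubectl get installations.operator.tigera.io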

Your Environment

  • Calico version: v3.23.1
  • Orchestrator version (e.g. kubernetes, mesos, rkt): Kubernetes/EKS 1.21
@caseydavenport (Member)

Hm, I'd expect those resources to be cleaned up by the Kubernetes owner reference GC logic, since the Installation resource should be deleted. Perhaps that's not being triggered for some reason when uninstalling via helm.
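
As a workaround in the meantime, a rough manual-cleanup sketch, assuming the Installation resource is named default (the operator's usual name) and that it still exists after the helm uninstall:

  # Deleting the Installation CR lets the Kubernetes garbage collector remove
  # the resources the operator created and owns (calico-node, calico-kube-controllers, ...).
  kubectl delete installation default
  # Once those are gone, the leftover namespaces can be removed as well.
  kubectl delete namespace calico-system
  kubectl delete namespace tigera-operator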

@caseydavenport (Member)

xref: tigera/operator#2031

@caseydavenport (Member)

This should be resolved in v3.28, which releases in April. This was blocking it: #8586

Assuming it looks good in v3.28, we'll look at backports of this to earlier releases as well.

fontexD commented May 16, 2024

Could it also be related to this?

E0516 13:04:55.711075 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts "calico-cni-plugin" not found]"
E0516 13:04:57.719707 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts "calico-cni-plugin" not found]"
E0516 13:04:59.708065 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts "calico-cni-plugin" not found]"
E0516 13:05:01.706860 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts "calico-cni-plugin" not found]"
E0516 13:05:03.711757 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts "calico-cni-plugin" not found]"

I got 100,000s of these entries every minute after I removed the tigera operator and went back to using the manifest setup instead (because my k8s cluster got insanely slow).

I tried to find all the resources left over from it, but I can't find any more, yet I keep getting these requests...

Is there any way to figure out what is calling it and how to remove it?

If I re-install the operator via .yaml (not helm), I start getting this issue:

1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, service account UID (757ccb8a-5b87-40fb-adb5-8e0be3441c9c) does not match claim (4a7cd4d8-5b79-41db-8a94-702a8aa4ab39)]"
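
One rough way to narrow this down, assuming the standard Calico CNI setup in which each node carries a kubeconfig with a token for the calico-cni-plugin service account at /etc/cni/net.d/calico-kubeconfig (the path and names here are the usual defaults, not confirmed from this thread):

  # On a node, check for a stale CNI kubeconfig whose embedded token still
  # references the old (deleted or re-created) service account.
  ls /etc/cni/net.d/
  cat /etc/cni/net.d/calico-kubeconfig
  # In the cluster, check whether the calico-cni-plugin service account exists
  # and in which namespace (kube-system for manifest installs, calico-system for operator installs).
  kubectl get serviceaccounts -A | grep calico-cni-plugin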

@caseydavenport (Member)

@fontexD could you please open a new issue and explain your scenario in more detail?
