
[POC] Many metadata-proxy pods stuck in CrashLoopBackOff in scale out kubemark from one TP, looks good from another TP #1289

Open

Sindica (Collaborator) opened this issue on Jan 20, 2022

What happened:
In a 2x2 scale-out kubemark cluster:

$ ./cluster/kubectl.sh --kubeconfig test/kubemark/resources/kubeconfig.kubemark.tp-2 get pods -AT | grep -v Running
TENANT   NAMESPACE     NAME                                                         HASHKEY               READY   STATUS             RESTARTS   AGE
system   kube-system   kubernetes-dashboard-848965699-ktpbt                         1860350797080663147   0/1     CrashLoopBackOff   9          44m

$ ./cluster/kubectl.sh --kubeconfig test/kubemark/resources/kubeconfig.kubemark.tp-1 get pods -AT | grep -v Running
TENANT   NAMESPACE     NAME                                                         HASHKEY               READY   STATUS             RESTARTS   AGE
system   kube-system   kubernetes-dashboard-848965699-ww7kj                         3223766609900507535   0/1     CrashLoopBackOff   9          47m
system   kube-system   metadata-proxy-v0.1-5rm2s                                    2833962277662696395   1/2     CrashLoopBackOff   11         37m
system   kube-system   metadata-proxy-v0.1-mbbbf                                    5739051913249442897   1/2     CrashLoopBackOff   11         37m

What you expected to happen:
All pods should be stable in Running status.
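To narrow down why the metadata-proxy containers keep restarting, a minimal diagnostic sketch is below. The pod names are copied from the tp-1 output above; `describe pod`, `logs --all-containers`, and `--previous` are standard kubectl operations, and any tenant-scoping flag this fork may additionally require is not assumed here:

$ # Inspect events and container states for one of the crashing pods
$ ./cluster/kubectl.sh --kubeconfig test/kubemark/resources/kubeconfig.kubemark.tp-1 -n kube-system describe pod metadata-proxy-v0.1-5rm2s
$ # Pull logs from all containers, including the previously crashed instance
$ ./cluster/kubectl.sh --kubeconfig test/kubemark/resources/kubeconfig.kubemark.tp-1 -n kube-system logs metadata-proxy-v0.1-5rm2s --all-containers=true --previous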

Sindica self-assigned this on Feb 25, 2022