okd kube-apiserver fails to start after long downtime #1535
Unanswered · RobVerduijn asked this question in Q&A

Hello,

After a prolonged period I decided to boot my OKD cluster again. I used `export KUBECONFIG=` to get access to the cluster. After approving many pending CSR requests, most pods are up and running again, and there are no more pending CSRs.

However, three pods in the openshift-kube-apiserver namespace are in CrashLoopBackOff, and they are complaining about the CA.

Does anybody know how I can fix this?

Rob
Replies: 2 comments
-
My first guess would be expired control-plane or node certificates, but according to "Recovering from expired control plane certificates" one would recover from that with the steps you already took (approving the pending CSRs). Would it be possible to create and share a must-gather?
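In case it helps, a must-gather is typically collected like this (a sketch using the default gather image; the destination directory is optional and the name here is arbitrary):

```shell
# Collect cluster diagnostics into a local directory
oc adm must-gather --dest-dir=./must-gather-output
```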
-
I don't mind; it's a test/training cluster which is not exposed to the internet. However, the unknown CA also interferes with the must-gather, so it's far from complete. The output is full of certificate errors, with a lot more CA complaints in between.
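Since the cluster is an offline test environment, one workaround sketch is to skip TLS verification while gathering; `--insecure-skip-tls-verify` is a standard oc global flag, but it should only be used on a cluster like this one. The pod name below is a placeholder:

```shell
# Retry the gather without verifying the (currently unknown) serving CA
oc --insecure-skip-tls-verify adm must-gather

# Pull logs from the crashlooping pods directly as well
oc --insecure-skip-tls-verify -n openshift-kube-apiserver logs <pod-name>
```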