cert-manager is creating/using invalid certs #1264
I've discovered that I have the same problem. Today the creation of a new NNCP failed in my OCP 4.16 cluster:
The nmstate-cert-manager pod is in a tight loop logging this:
Kubernetes nmstate operator: 4.16.0-202409111235
nmstate-cert-manager image:
nmstate-webhook image:
I've noticed that the nmstate-ca secret was being recreated several times per second, causing high CPU usage. After removing the operator, deleting the openshift-nmstate namespace, and reinstalling it, it now works. One thing I've observed is that the nmstate-cert-manager pod no longer exists. Maybe related to #1263?
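The secret churn described above can be confirmed directly: a Secret's `metadata.uid` is immutable for the object's lifetime, so if two samples taken a couple of seconds apart differ, the object was deleted and recreated in between. A minimal diagnostic sketch, assuming the `openshift-nmstate` namespace and the `nmstate-ca` secret name from this report:

```shell
# Succeeds (exit 0) when the two sampled UIDs are both non-empty and differ,
# i.e. the object was deleted and recreated between the samples.
uid_changed() {
  [ -n "$1" ] && [ -n "$2" ] && [ "$1" != "$2" ]
}

# Sample the nmstate-ca secret twice, two seconds apart (namespace assumed
# from the report; skipped entirely when kubectl is not on PATH).
if command -v kubectl >/dev/null 2>&1; then
  prev=$(kubectl -n openshift-nmstate get secret nmstate-ca -o jsonpath='{.metadata.uid}')
  sleep 2
  curr=$(kubectl -n openshift-nmstate get secret nmstate-ca -o jsonpath='{.metadata.uid}')
  if uid_changed "$prev" "$curr"; then
    echo "nmstate-ca was recreated within 2s (UID changed)"
  fi
fi
```

On a healthy cluster the UID stays constant and nothing is printed; in the tight-loop state described here it fires almost every run.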
We have the same issue with the same nmstate version in OpenShift. We reverted to the previous version (4.16.0-202409032335) and the issue is not present.
Can confirm this.
Looks like cleaning up the
Instead of using the namespace from the returned
WDYT @qinqon?
@bverschueren @seb2020 we are reverting this in 4.16; that was a mistake. After proper testing we will cherry-pick it, but only to 4.17.
I am closing this since we have reverted the behaviour.
/reopen with
@goern: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@goern the latest 4.16 was reverted and 4.17 is fixed for upgrade; it should work now.
Closing, since it got fixed in the proper place.
What happened:
What you expected to happen:
Valid certificates?!
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
I'm using registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:00eb91c1ff12cbf5c1cf0dfbc5476ff2fc78ad24c62e1fb3f1352d9bf51cc980 but this issue seems to be rooted in the upstream project.
Environment:
- NodeNetworkState on affected nodes (use `kubectl get nodenetworkstate <node_name> -o yaml`): ok
- Problematic NodeNetworkConfigurationPolicy: n/a
- kubernetes-nmstate image (use `kubectl get pods --all-namespaces -l app=kubernetes-nmstate -o jsonpath='{.items[0].spec.containers[0].image}'`): registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:00eb91c1ff12cbf5c1cf0dfbc5476ff2fc78ad24c62e1fb3f1352d9bf51cc980
- NetworkManager version (use `nmcli --version`): n/a
- Kubernetes version (use `kubectl version`): n/a