Support startup probes #2644
Comments
/kind support feature
What's the underlying use case we're trying to solve?
Hi @vincepri, the use case where we want to introduce this feature is a dummy workload called example-cnf that our team maintains to demonstrate a well-built CNF according to CNF best practices. That guide, which is ultimately used as a reference for Red Hat CNF Certification, has a specific section on liveness, readiness and startup probes, and it references the unit test used to evaluate whether a CNF defines a startup probe. In summary, startup probes are needed to pass one of the tests in that certification workflow, and currently the operators built with operator-sdk (which consumes this repo) don't have this feature.
This seems to be on the pod of the workload and not the underlying controller-runtime components. https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ shows how to do this on workloads, along with liveness/readiness probes.
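For illustration, a startup probe configured on the workload pod (as those docs describe) could look roughly like the sketch below, expressed with the Kubernetes Go API types. The port (8081) and the /healthz and /readyz paths are assumptions matching the defaults commonly used by operator-sdk scaffolding, and the image name is hypothetical; your manifest may differ.

```go
package probes

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// managerContainer sketches a container spec whose startupProbe reuses the
// manager's existing health endpoint. Values are illustrative, not prescriptive.
func managerContainer() corev1.Container {
	healthPort := intstr.FromInt(8081) // assumed health-probe bind port

	return corev1.Container{
		Name:  "manager",
		Image: "example.com/example-cnf-operator:latest", // hypothetical image
		StartupProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: healthPort},
			},
			// Give the manager up to 30 * 5s = 150s to start before the
			// liveness probe begins counting failures.
			FailureThreshold: 30,
			PeriodSeconds:    5,
		},
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: healthPort},
			},
			PeriodSeconds: 20,
		},
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/readyz", Port: healthPort},
			},
			PeriodSeconds: 10,
		},
	}
}
```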
Hi @troy0820, thanks for your comments! I also took a look at these docs as a reference, and in that example, yes, the startupProbe uses the same endpoint as the livenessProbe, but that's because it uses this code as an example, and that webserver only implements that single health endpoint. Of course, this depends on the pod of the workload, but the problem I tried to expose here is that, in operator-sdk, if you check the code (e.g. this one), it only wires up the healthz/readyz endpoints for the liveness and readiness probes, with nothing equivalent for a startup probe.
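For reference, this is roughly the health wiring that the current scaffolding produces (a sketch, not the exact generated main.go): both /healthz and /readyz are backed by the trivial healthz.Ping checker, and there is no third, startup-specific endpoint.

```go
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
)

func main() {
	// Serve the health endpoints on :8081 (the port is an assumption).
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		HealthProbeBindAddress: ":8081",
	})
	if err != nil {
		os.Exit(1)
	}

	// Served at /healthz -> targeted by the livenessProbe.
	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		os.Exit(1)
	}
	// Served at /readyz -> targeted by the readinessProbe.
	if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
		os.Exit(1)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```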
I don't understand what an endpoint serving the startup probe would do differently from the normal healthz one, and a dummy workload that should pass a certification doesn't seem a particularly convincing use case. You can point the startupProbe at the healthz endpoint.
It's just to have a different endpoint per probe. My assumption is that each probe offered by Kubernetes (liveness, readiness and startup) has a different meaning, so each should be covered by a different endpoint. Pointing the startup probe at the healthz endpoint feels to me more like a workaround to make the startup probe work than like offering a genuinely separate endpoint with its own functionality. That said, it's true that in the k8s documentation the startup probe is set to the liveness probe endpoint, so in the end they both rely on the healthz endpoint. As I said, I don't think that's good practice if we consider the meaning of each probe, but if you don't think this would be useful, feel free to close the issue.
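One way to approximate a check with startup-specific meaning using only the API that exists today is sketched below: register an extra named readiness check that starts passing only once the operator's one-time initialization has finished. The check name ("started") and the markStarted hook are assumptions for illustration, and whether named checks are also reachable at a subpath such as /readyz/started depends on the controller-runtime version.

```go
package probes

import (
	"errors"
	"net/http"
	"sync/atomic"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
)

// setupStartupCheck registers a "started" check that fails until the returned
// markStarted callback is invoked, alongside the usual always-passing
// liveness check.
func setupStartupCheck(mgr ctrl.Manager) (markStarted func(), err error) {
	var started atomic.Bool

	check := func(_ *http.Request) error {
		if !started.Load() {
			return errors.New("operator has not finished start-up yet")
		}
		return nil
	}
	// /readyz aggregates this check; it will fail until markStarted is called.
	if err := mgr.AddReadyzCheck("started", check); err != nil {
		return nil, err
	}
	// Keep the plain liveness check as-is.
	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		return nil, err
	}
	return func() { started.Store(true) }, nil
}
```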
The PR was finally closed, so I'll close this issue as won't fix.
Hi all! I was working on updating some operators I'm building with operator-sdk, which consumes this project, and I was able to define liveness and readiness probes for the pods, a feature already supported since #297.
However, I also wanted to introduce startup probes, which are documented in the k8s docs as well, but there's no reference to them in the operator-sdk project. I evaluated the code to try to implement it, but then discovered that controller-runtime doesn't seem to support that probe yet.
I would like to ask whether it's feasible to introduce this feature in controller-runtime, probably following the same approach that was used there. I can work on it if you agree. Thanks!
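Purely as a hypothetical illustration of the request (none of this exists in controller-runtime today), a startup-probe counterpart to the #297-style AddHealthzCheck/AddReadyzCheck methods might take a shape like the following; the method name AddStartupzCheck and the /startupz path are invented for this sketch.

```go
package probes

import (
	"sigs.k8s.io/controller-runtime/pkg/healthz"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

// startupAwareManager is what this issue asks the real manager.Manager to
// become; it does NOT exist in controller-runtime.
type startupAwareManager interface {
	manager.Manager
	// AddStartupzCheck would register a named check served under /startupz,
	// analogous to AddHealthzCheck (/healthz) and AddReadyzCheck (/readyz).
	AddStartupzCheck(name string, check healthz.Checker) error
}

// registerProbes shows how an operator's main.go might look if the feature
// existed: one distinct check per probe kind.
func registerProbes(mgr startupAwareManager) error {
	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		return err
	}
	if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
		return err
	}
	return mgr.AddStartupzCheck("startupz", healthz.Ping)
}
```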