CNF-13731: Cert Manager HTTP01 Proxy #1773

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Open
wants to merge 5 commits into
base: master
Choose a base branch
from
Open
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
Loading
Sorry, something went wrong. Reload?
Sorry, we cannot display this file.
Sorry, this file is invalid so it cannot be displayed.
216 changes: 216 additions & 0 deletions enhancements/cert-manager-proxy/http01_challenge_proxy.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,216 @@
---
title: http01-challenge-cert-manager-proxy
authors:
- "@sebrandon1"
reviewers:
- "@TrilokGeer"
- "@swagosh"
approvers:
- "@tkashem"
- "@deads2k"
- "@derekwaynecarr"
api-approvers:
- "@JoelSpeed"
creation-date: 2025-03-28
last-updated: 2025-03-28
status: implementable
tracking-link:
- https://issues.redhat.com/browse/CNF-13731
---

# HTTP01 Challenge Proxy for Cert Manager

![HTTP01 Challenge Proxy Diagram](http01_challenge.png)

## Summary

This enhancement applies to baremetal platforms only. It provides a way for cert-manager to complete HTTP01 challenges against the API endpoint (such as `api.cluster.example.com`), similar to the way certificate challenges are handled for other OpenShift Ingress endpoints.

## Motivation

Cert manager can be used to issue certificates for the OpenShift Container Platform (OCP) endpoints (e.g., console, downloads, oauth) using an external ACME Certificate Authority (CA). These endpoints are exposed via the OpenShift Ingress (`*.apps.cluster.example.com`), and this is a supported and functional configuration today.

However, cluster administrators often want to use Cert Manager to issue custom certificates for the API endpoint (`api.cluster.example.com`). Unlike other endpoints, the API endpoint is not exposed via the OpenShift Ingress. Depending on the OCP topology (e.g., SNO, MNO, Compact), it is exposed directly on the node or via a keepalived VIP. This lack of management by the OpenShift Ingress introduces challenges in obtaining certificates from an external ACME CA.

The gap arises due to how the ACME HTTP01 challenge works. The following scenarios illustrate the challenges:

1. **Standard Clusters**: The API VIP is hosted on the control plane nodes which do not host an OpenShift Router. The http01 challenge, which is directed at the API VIP (the IP where `api.cluster.example.com` DNS resolves), will not hit an OpenShift Router and thus not reach the challenge response pod started by Cert Manager.
2. **Compact Clusters**: The node hosting the API VIP may also host an OpenShift Router. If no router is present on the node hosting the VIP, the challenge will fail.
3. **SNO (Single Node OpenShift)**: The same nodes host both the ingress and API components. Both FQDNs (`api` and wildcard) resolve to the same IP, making the challenge feasible.

To address this gap, a small proxy was developed. The proxy runs on the cluster as a DaemonSet on the control plane nodes, adds `nftables` rules to those nodes, and ensures that connections reaching the API on port 80 are redirected to the OpenShift Ingress Routers. The proxy implementation creates a reverse proxy to the apps VIP and uses `nftables` to redirect traffic from `API:80` to `PROXY:8888`.

- **Proxy Code**: [GitHub Repository](https://github.com/mvazquezc/cert-mgr-http01-proxy/tree/main)
- **Deployment Manifest**: [Manifest Link](https://github.com/mvazquezc/cert-mgr-http01-proxy/blob/main/manifests/deploy-in-ocp.yaml)

This enhancement aims to provide a robust solution for managing certificates for the API endpoint in baremetal environments.

### User Stories

1. **As a cluster administrator**, I want to manage custom certificates for the API endpoint (`api.cluster.example.com`) using an external ACME CA, so that I can ensure secure communication for my cluster's API.
2. **As a cluster administrator on a baremetal platform**, I want a reliable solution to handle HTTP01 challenges for the API endpoint, even when the endpoint is not managed by OpenShift Ingress, so that I can avoid manual workarounds.
3. **As a developer**, I want a simple deployment mechanism for the HTTP01 challenge proxy, so that I can easily integrate it into my existing cluster setup.

### Goals

- Provide a reliable mechanism for Cert Manager to complete HTTP01 challenges for the API endpoint (`api.cluster.example.com`) in baremetal environments.
- Ensure compatibility with various OpenShift topologies, including Standard Clusters, Compact Clusters, and SNO.
- Minimize operational complexity by using a DaemonSet-based deployment and `nftables` for traffic redirection.

## Proposal

The HTTP01 Challenge Proxy will be implemented as a DaemonSet running on the cluster. It will:

- Redirect HTTP traffic from the API endpoint (`api.cluster.example.com`) on port 80 to the OpenShift Ingress Routers.
- Use `nftables` for traffic redirection from `API:80` to `PROXY:8888`.
- Be deployed using a manifest that includes all necessary configurations.

The proxy will ensure compatibility with various OCP topologies, including SNO, MNO, and Compact clusters, addressing the challenges of HTTP01 validation for the API endpoint.
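
For illustration only, the following is a minimal sketch of what such a DaemonSet could look like; the namespace, labels, and image below are placeholder assumptions rather than the actual manifest (see the linked PoC deployment manifest for the real one):

```yaml
# Illustrative sketch only; names, namespace, and image are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: http01-challenge-proxy          # hypothetical name
  namespace: openshift-http01-proxy     # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: http01-challenge-proxy
  template:
    metadata:
      labels:
        app: http01-challenge-proxy
    spec:
      # Run only on control plane nodes, which host the API VIP.
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      # The proxy listens in the host network namespace so nftables can redirect API:80 to it.
      hostNetwork: true
      containers:
      - name: proxy
        image: quay.io/example/cert-mgr-http01-proxy:latest   # placeholder image
        ports:
        - containerPort: 8888
          hostPort: 8888
```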

### API Extensions

A new CR type may be created that can be applied to clusters. This new type will be stored in the [openshift/api](https://github.com/openshift/api) repo.

**Review comment:** Is a new CR required?

**Member Author:** This is something I discussed with @mvazquezc as a possibility: a new CR that you would apply to get the proxy applied to the nodes.

**Contributor:** What group will this new CRD be created under? All new APIs will need to be tied to a TPNU feature-gate and start as v1alpha1.

What is this API going to look like?

**Member Author:** I started a rough try at creating a new type here under v1alpha1 but haven't turned it into a PR yet because I figured the enhancement would be required first.

Should I create a PR there as well?

**Contributor:** @sebrandon1 I think it is reasonable to link to a PR in the openshift/api repo as part of the enhancement for review purposes.

That being said, if you do link to a PR please share the YAML representation of how users would interact with the API as part of this section.

**Member Author:** PR opened here: openshift/api#2318

And I will add an example CR in this section.

**Member Author (@sebrandon1, May 12, 2025):** CR has been added. I modified the spec section per a comment from @imiller0 a while ago, which basically makes it an empty section.

EDIT: I should clarify: because the proxy is either on or off, just the existence of the CR would be the enable/disable switch.

**Contributor:** The example CR looks like a namespaced resource.

Does this mean that I can have multiple proxies running? What happens in that case?

Regarding the empty spec, I'm a little skeptical of this. Is there nothing that users may want to be able to configure for the challenge proxy? Nothing that we would require some user input for?

If there truly is nothing to include in a spec, is there a better location for an enable/disable-like API to be added?


Potential Example of a CR:

```
apiVersion: network.openshift.io/v1alpha1
kind: HTTP01ChallengeProxy
metadata:
  name: example-http01challengeproxy
  namespace: default
spec:
  # Add fields here to specify the desired state of the HTTP01ChallengeProxy
status:
  conditions:
  - type: Ready
    status: "True"
    lastTransitionTime: "2025-05-12T00:00:00Z"
    reason: "Initialized"
    message: "HTTP01ChallengeProxy is ready"
```

**Contributor:** The motivation and non-goals state that only one endpoint, `api.cluster.example.com`, is scoped for this work. Forward-thinking, are there any other endpoints that users may want to be able to proxy that we would want to support?

What other things might a user want to configure here, either now or long-term?

**Member:** That's the only user-exposed endpoint exposed by the platform that isn't routed through the ingress controller; I don't think we need to support other endpoints.

**Contributor:** One thing I noticed is that the proxy is specified as hardcoded to port 8888. Why this port specifically? How should users handle a scenario where something else that is critical to their system needs to run on port 8888? Would it be worth making the port a configurable value?

### Implementation Details/Notes/Constraints
**Contributor:** I left some comments with some further questions on how this is being done, but I'd like to see a more descriptive section here that explains how certain aspects are going to be implemented.

For example, does this need to be a core component of the OpenShift payload that is shipped with every cluster? If so, you may want to consider the cluster operator approach: https://github.com/openshift/enhancements/blob/master/dev-guide/operators.md

If not, could this be implemented as an operator installable through OLM?

Should multiple instances of a proxy be allowed to run? If so, what are some issues that may be encountered when running multiple instances? How do you plan to mitigate these issues?

**Member:** Answering only the last part; I will let others chime in for the first part.

No, multiple instances of the proxy are not allowed to run. There will be port collisions, since the port used (8888) will be listening on the host network namespace. This can be mitigated by checking whether the port is in use when initializing the proxy. During upgrades, etc., the idea is to use the recreate strategy so the existing proxy gets deleted before the new one gets created.

**Contributor:** How do you plan to ensure that only a single instance of the proxy runs? Is this one proxy per node? One proxy per cluster?

If there is a risk of port collision, would it make sense to allow configuration of the port from the user perspective?


- The proxy will be deployed as a DaemonSet to ensure it runs on all nodes in the cluster that may host the API VIP.
**Contributor:** What will deploy the DaemonSet? Will this be a default component on OpenShift clusters? Does it need to be a part of the core payload or can it be a "layered product"?

**Member Author:** @mvazquezc can correct me if I'm wrong, but this will be an optional feature that Telco customers can enable.

**Contributor:** What approaches have you considered for allowing this to be an opt-in feature?

- The nftables rules will be added to the nodes. The proxy will listen on port 8888 and redirect traffic to the OpenShift Ingress Routers.
**Contributor:** How does this happen? What are the potential impacts of making modifications to node configurations?

**Member:** That's done via code. In OCP versions >4.17, adding nftables rules won't require node reboots if the MCO gets configured appropriately. In our PoC code you can see how the nftables rules are created here. Configuration templates are here.

In terms of potential impacts: this could cause disruption if there are other services listening on the port we configured (since this is supposed to run on CP nodes, the impact should be limited, as we don't expect anything running on CP that we don't control).

**Contributor:**

> That's done via code.

Please explain, in sentences, the steps that will be done via code to do this. What interactions with other components will be needed? What considerations are in place for different scenarios?

The enhancement should be the source of record for decisions being made. Code changes, even in PoC code, may make it harder for future readers of the enhancement to follow the historical decisions that were made. Readers of enhancements shouldn't have to dig through code to understand what is happening with the system as part of this proposal.

- The implementation relies on `nftables` for traffic redirection, which must be supported and enabled on the cluster nodes.
**Contributor:** What should happen if nftables is not enabled on the cluster nodes?

**Member:** AFAIK nftables is required for OCP nodes to run. OVN-K requires it (maybe other CNIs don't), but I still don't think it can be disabled.

**Contributor:** Any references that state it cannot be disabled? Isn't nftables a systemd service that runs on the nodes? Can someone modify the configurations that MCO applies to mark it as disabled?

Maybe we should consult someone from the MCO team to verify whether or not nftables can be disabled?

**Contributor:** Maybe @yuqi-zhang?

**Contributor:** The MCO doesn't directly manage nftables in any way on the nodes. I believe it's being shipped (and mostly configured) in RHCOS directly.

As a user, you can technically do whatever you want, including disabling nftables via a MachineConfig, but that would be an explicit user-driven decision and not something we manage.

- The demo deployment manifest for the proxy is available [here](https://github.com/mvazquezc/cert-mgr-http01-proxy/blob/main/manifests/deploy-in-ocp.yaml).
- An example implementation can be found in this [repository](https://github.com/mvazquezc/cert-mgr-http01-proxy/tree/main).

### Design Details

- **Proxy Deployment**: The proxy will be deployed using a Kubernetes DaemonSet. The DaemonSet will apply the nftables rule via a pod that runs to completion.
- **Traffic Redirection**: This will use `nftables` rules to redirect incoming traffic on `API:80` to `PROXY:8888`.
- **Security**: The proxy will only handle HTTP traffic for the HTTP01 challenge and will not interfere with other traffic or services.
- **Monitoring**: Logs and metrics will be exposed to help administrators monitor the proxy's behavior and troubleshoot issues.
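
As a rough sketch of the kind of redirect rule involved, the snippet below wraps an example `nftables` ruleset in a ConfigMap that the proxy pod could load; the table and chain names, namespace, and example VIP address are assumptions, not the PoC's actual values:

```yaml
# Illustrative only: the ConfigMap wrapping, table/chain names, and VIP are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: http01-proxy-nftables            # hypothetical name
  namespace: openshift-http01-proxy      # hypothetical namespace
data:
  redirect.nft: |
    table ip cert_mgr_http01_proxy {
      chain prerouting {
        type nat hook prerouting priority dstnat; policy accept;
        # Redirect HTTP01 challenge traffic arriving for the API VIP on port 80
        # to the proxy listening on port 8888 in the host network namespace.
        ip daddr 192.0.2.10 tcp dport 80 redirect to :8888   # 192.0.2.10 = example API VIP
      }
    }
```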

### Drawbacks

1. **Dependency on nftables**: The solution relies on `nftables`, which may not be available or enabled on all environments.

**Review comment:** nftables is available on control plane (and worker) nodes.

**Contributor:** Is it always available? What is the expected behavior if it is not available but someone attempts to use this feature?

**Member Author:** This might be a better question for @imiller0 or @mvazquezc: what happens if nftables doesn't exist? I can edit the enhancement proposal with what is discussed.

2. **Additional Resource Usage**: Running the proxy as a DaemonSet introduces additional resource usage on the cluster nodes while the proxy pod is applying its nftables rules.
3. **Complexity**: The solution adds another component to the cluster, which may increase operational complexity.

### Alternatives (Not Implemented)

The alternatives were actually prototyped during the investigation (see the presentation [slides](https://docs.google.com/presentation/d/1mJ1pnsPiEwb-U5lHwhM2UkyRmkkLeYxj3cfE4F7dOx0/edit#slide=id.g547716335e_0_260)); the approaches are listed below.

1. **RHACM Manages Cert Manager Deployment**: RHACM (Red Hat Advanced Cluster Management) manages the deployment of Cert Manager and certificates on the spokes using Policies. Each managed cluster runs its own Cert Manager instance. This approach decentralizes certificate management but requires Cert Manager to be deployed and maintained on each spoke cluster.

2. **Single Addon on the Hub**: A single addon runs on the hub and watches the spoke clusters' APIs for `Certificate` and `CertificateRequest` related events. When these APIs are created, updated, or deleted in the spoke, the addon syncs the contents back and forth between the hub and the spokes. This approach centralizes management but introduces additional complexity in syncing data.

3. **Cert Manager Controller per Spoke**: A Cert Manager controller is configured for each spoke cluster on the hub. These controllers run in the spoke cluster namespace and are configured to use the spoke’s `system:admin` kubeconfig. This approach allows centralized control but requires managing multiple controllers on the hub.

4. **Single Cert Manager Controller on the Hub**: A single Cert Manager controller runs on the hub. Certificates and `CertificateRequests` for each spoke cluster are created with data known beforehand (e.g., API, Ingress, CNFs). The resulting secrets are synced to the spokes via RHACM Policies. This approach simplifies the deployment but requires pre-configured data for each spoke.

More information about the investigation can be found [here](https://docs.google.com/presentation/d/1mJ1pnsPiEwb-U5lHwhM2UkyRmkkLeYxj3cfE4F7dOx0/edit#slide=id.g547716335e_0_260).

### Risks and Mitigations

1. **Proxy Failure**: If the proxy fails, HTTP01 challenges for the API endpoint will not succeed. Mitigation: Use health checks and monitoring to ensure the proxy is running correctly.
**Contributor:** What is the impact to users if the HTTP01 challenges fail? Does this prevent someone from accessing their cluster?

**Member:** It won't prevent users from accessing their cluster. They may need to accept insecure connections, though.

**Contributor:** If possible, please enumerate the impact, if any, to end users and other system components during this scenario. Include the steps they may be able to take to work around or remedy the issue.

2. **Traffic Interference**: The proxy could inadvertently interfere with other traffic. Mitigation: Carefully scope the proxy's functionality to only handle HTTP01 challenge traffic.
**Contributor:** How would I identify this? What can I do to fix this scenario?

**Member (@mvazquezc, May 13, 2025):** We should configure alerts for expiring certificates. This way we will be alerted when we are close to renewal time. These alerts can include warning alerts (when certs are about to expire) and critical alerts (when certs have expired).
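
Sketching what such alerting could look like (the rule names, thresholds, and namespace below are assumptions; the expressions rely on cert-manager's `certmanager_certificate_expiration_timestamp_seconds` metric being scraped):

```yaml
# Illustrative only: alert names, thresholds, and namespace are assumptions.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cert-expiry-alerts                # hypothetical name
  namespace: openshift-monitoring         # hypothetical placement
spec:
  groups:
  - name: certificate-expiry
    rules:
    - alert: CertificateExpiringSoon
      # Warning when a cert-manager managed certificate expires within 14 days.
      expr: certmanager_certificate_expiration_timestamp_seconds - time() < 14 * 24 * 3600
      for: 1h
      labels:
        severity: warning
    - alert: CertificateExpired
      # Critical once the certificate has already expired.
      expr: certmanager_certificate_expiration_timestamp_seconds - time() <= 0
      for: 5m
      labels:
        severity: critical
```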


### Implementation History

- **2025-03-28**: Enhancement proposal created.

### References

- [Cert Manager Expansion JIRA Epic](https://issues.redhat.com/browse/CNF-13731)
- [ACME HTTP01 Challenge](https://letsencrypt.org/docs/challenge-types/#http-01-challenge)
- [Proxy Code Repository](https://github.com/mvazquezc/cert-mgr-http01-proxy/tree/main)
- [Deployment Manifest](https://github.com/mvazquezc/cert-mgr-http01-proxy/blob/main/manifests/deploy-in-ocp.yaml)

### Non-Goals

- This enhancement does not aim to replace or modify the existing OpenShift Ingress functionality.
- It does not provide support for non-HTTP01 challenge types (e.g., DNS-01).
- It does not address certificate management for endpoints other than the API endpoint (`api.cluster.example.com`).
- It does not provide a solution for environments where `nftables` is not supported.

### Workflow Description

1. Cert Manager initiates an HTTP01 challenge for the API endpoint (`api.cluster.example.com`).
2. The HTTP01 challenge request is directed to the API VIP on port 80.
3. The HTTP01 Challenge Proxy intercepts the traffic using `nftables` and redirects it to the proxy pod on port 8888.
4. The proxy pod forwards the request to the OpenShift Ingress Router, which serves the challenge response from the Cert Manager challenge pod.
5. The ACME CA validates the challenge and issues the certificate for the API endpoint.
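
For illustration, a sketch of the resources a cluster administrator might create to drive this workflow with cert-manager; the issuer name, account email, namespace, and ingress class are assumptions, not prescribed values:

```yaml
# Illustrative only: names, email, namespace, and ingress class are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: acme-http01                        # hypothetical issuer name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: acme-http01-account-key
    solvers:
    - http01:
        ingress:
          class: openshift-default         # assumes the default OpenShift ingress class
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-server-cert                    # hypothetical name
  namespace: openshift-config              # hypothetical namespace
spec:
  secretName: api-server-cert-tls
  dnsNames:
  - api.cluster.example.com
  issuerRef:
    name: acme-http01
    kind: ClusterIssuer
```

Creating the `Certificate` is what would trigger step 1 of the workflow; cert-manager then spawns the challenge solver pod whose response the proxy makes reachable from the API VIP.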
**Contributor:** Could you be a bit more descriptive in the workflow here? What steps would a user actually take to trigger this workflow end-to-end? What steps do other components in the interaction take?

> Cert Manager initiates an HTTP01 challenge for the API endpoint (api.cluster.example.com).

Who/what triggers this?

> The HTTP01 Challenge Proxy intercepts the traffic using nftables and redirects it to the proxy pod on port 8888.

Who/what deploys this? When in the workflow is this actually deployed?

I'd recommend taking a look at the template for this section: https://github.com/openshift/enhancements/blob/master/guidelines/enhancement_template.md#workflow-description. It has some links to some good examples.

### Topology Considerations

- **Standard Clusters**: The API VIP is hosted on control plane nodes. The proxy ensures that HTTP01 challenges are redirected to the OpenShift Ingress Routers.
- **Compact Clusters**: The proxy handles scenarios where the API VIP node may or may not host an OpenShift Router, ensuring consistent challenge redirection.
- **SNO (Single Node OpenShift)**: The proxy is not strictly required in this topology, as the API and wildcard FQDNs resolve to the same IP. However, it can still be deployed for consistency.

#### Hypershift / Hosted Control Planes

This enhancement does not directly apply to Hypershift deployments, as the API endpoint management in Hypershift differs from baremetal environments. However, the proxy's design could be adapted for similar use cases in Hypershift if needed.

#### Standalone Clusters

For standalone clusters, the proxy ensures that HTTP01 challenges for the API endpoint are redirected to the OpenShift Ingress Routers, regardless of whether the API VIP node hosts a router.

#### Single-node Deployments or MicroShift

In SNO or MicroShift deployments, the proxy is not strictly required, as the API and wildcard FQDNs resolve to the same IP. However, deploying the proxy ensures consistency and simplifies certificate management.

## Test Plan

1. **Unit Tests**: Validate the proxy's functionality in isolation, including traffic redirection and error handling.
2. **Integration Tests**: Deploy the proxy in a test cluster and verify that HTTP01 challenges for the API endpoint succeed.
3. **Performance Tests**: Measure the proxy's impact on cluster performance and resource usage.
4. **Topology Tests**: Test the proxy in Standard Clusters, Compact Clusters, and SNO environments to ensure compatibility.

## Graduation Criteria

### Dev Preview -> Tech Preview

- The proxy is implemented and tested in development environments.
- Documentation is available for deploying and configuring the proxy.

### Tech Preview -> GA

- The proxy is deployed in production environments and successfully handles HTTP01 challenges for various OCP topologies.
- Performance and reliability meet production-grade requirements.

### Removing a deprecated feature

This enhancement does not deprecate any existing features.

## Upgrade / Downgrade Strategy

Updated versions of the proxy can be applied to the cluster in the same way as the initial deployment.
**Contributor:** Beyond the proxy itself, how do you plan to handle typical upgrade situations where there may be older and newer instances of the proxy running at the same time due to a rolling upgrade? Are there any special considerations needed for this scenario?

What happens if an upgrade fails midway through? What steps would a user need to take to get their cluster back to the previous state?

**Member:** We can't have multiple instances of the proxy running; they listen on port 8888 in the host network. We can control that with a recreate policy for the deployment/daemonset. In terms of new/old versions, the only thing that may change (and I doubt it) is the nftables rule that redirects traffic from port 80 to proxy port 8888. But we should still ensure backwards compatibility in the proxy code.
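
One possible way to express that "delete before create" behavior for a DaemonSet is sketched below; the values are assumptions. With `maxSurge: 0`, the old pod on a node is removed before its replacement starts, so old and new proxies never contend for host port 8888 on the same node:

```yaml
# Illustrative fragment of the proxy DaemonSet spec (see the earlier sketch):
# roll out node by node, never running old and new pods on the same node.
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 0
```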


## Version Skew Strategy

Any changes to the proxy's behavior will be documented to ensure compatibility with older cluster versions.
**Contributor:** Do any special considerations need to be made for the interactions between the component that deploys the proxy and updates node configurations, and the component that rolls out node configuration changes?

During upgrades there will always be version skew between components, so capturing how your component handles this is helpful.


## Operational Aspects of API Extensions

- **Monitoring**: Logs and metrics will be exposed to help administrators monitor the proxy's behavior and troubleshoot issues.
- **Resource Usage**: The proxy's resource requirements will be minimal, as it only handles HTTP01 challenge traffic.
- **Failure Recovery**: Health checks will ensure that the proxy is running correctly, and failed pods will be automatically restarted.
**Contributor:** What happens if the proxy enters a CrashLoopBackoff? If there is a negative impact to the cluster, how can an admin/user disable or remove the proxy?


## Support Procedures

Support for the proxy will be provided through standard OpenShift support channels. Administrators can refer to the deployment documentation and logs for troubleshooting.
**Contributor:** I'd like to see a bit more information here; "standard support channels" isn't all that helpful in representing how someone providing support can evaluate whether this component is behaving correctly, and what steps they can take to mitigate issues it may be causing on the system.

The template section has some good examples of things to take into consideration: https://github.com/openshift/enhancements/blob/master/guidelines/enhancement_template.md#support-procedures