Discussed in #4091

Originally posted by brianehlert July 11, 2023
NGINX Ingress Controller customers have expressed a desire to keep the benefits of NGINX Ingress Controller balancing traffic directly to all of the pods of a Service, while also being able to say: "if all of the pods of the backend service become unhealthy, direct the traffic to this alternate location".
In nginx.conf this can be achieved with the backup parameter on a server in the upstream block, which designates an upstream server to be used only when all of the other servers fail (fail to respond, are deemed unhealthy via active or passive health checks, etc.).
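For illustration, a minimal upstream block using the backup parameter might look like the following; the addresses and names are placeholders, not taken from the proposal.

```nginx
upstream webapp {
    # Primary endpoints (the pod addresses NIC would normally populate)
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;

    # Receives traffic only when all of the servers above are unavailable
    server backup.example.com:8080 backup;
}
```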
The key here is that a local service has fully failed and traffic needs to be forwarded to some other target.
This is different from using weights. With weights, as you might use for a blue/green deployment, there is always the risk of a small number of requests being routed to the alternative destination, and that does not work for all applications or situations. For example, NIC today supports weights of 1-99, which means that with two upstream services defined, 99 out of every 100 requests could be sent to one service while the remaining request is sent to the other. If that is acceptable for the application, it can be achieved today.
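For reference, a minimal sketch of that existing weight-based approach using a VirtualServer with splits; the service and upstream names are placeholders.

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  upstreams:
    - name: primary
      service: webapp-v1-svc
      port: 80
    - name: alternate
      service: webapp-v2-svc
      port: 80
  routes:
    - path: /
      # Roughly 99 of every 100 requests go to primary, 1 to alternate
      splits:
        - weight: 99
          upstream: primary
        - weight: 1
          upstream: alternate
```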
Since NGINX Ingress Controller is continuously updating the Upstream server list, it is necessary to represent this concept in YAML.
What I am proposing is this:
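A minimal sketch of what this could look like follows; the backup and backupPort field names, along with all service names and hosts, are assumptions for illustration rather than a final schema.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backup-svc
spec:
  type: ExternalName
  # External target that receives traffic only on full failure of the primary pods
  externalName: backup.example.com
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  upstreams:
    - name: webapp
      service: webapp-svc
      port: 80
      # Single backup target, resolved via the ExternalName service above (assumed field names)
      backup: backup-svc
      backupPort: 80
  routes:
    - path: /
      action:
        pass: webapp
```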
In this example the backup is another service endpoint that is represented by an ExternalName service. It only has to adhere to the limitations of an ExternalName service.
Because it is a backup, I am also limiting it to a single service target. Therefore, if the Service were some secondary service in the cluster, its Service name would be resolved and the Service's cluster IP would be the target for backup.
### Tasks
- [x] Update documentation
- [x] Add examples for VS and TS
- [ ] Add Python integration tests
- [ ] https://github.com/nginxinc/kubernetes-ingress/issues/4774