
Yelb Architecture on Azure

Azure offers several options for deploying a web application such as the Yelb application on an Azure Kubernetes Service (AKS) cluster and securing it with a web application firewall. Currently, the Azure services that support Azure Web Application Firewall (WAF) are Azure Front Door and Azure Application Gateway. You can find more information about them here:

Please note that Application Gateway for Containers currently does not support Azure Web Application Firewall.

Alternatively, you can use the ModSecurity open-source web application firewall with the NGINX ingress controller instead of the Azure Web Application Firewall. Each of these solutions has its own benefits, caveats, and suggested scenarios, which we explore below.

Table of Contents

Azure Load Balancers and Web Application Firewall

Before examining each solution, let's take a brief look at the Azure services used by the proposed architectures:

Azure Application Gateway

Azure Application Gateway is deployed in a dedicated subnet within the same virtual network that hosts the AKS cluster or in a peered virtual network. Azure Application Gateway is a regional, web-traffic load balancer that enables customers to manage inbound traffic to multiple downstream web applications and REST APIs. Traditional load balancers operate at the transport layer (OSI layer 4, TCP and UDP) and route traffic based on source IP address and port to a destination IP address and port. Application Gateway is instead an application-layer (OSI layer 7) load balancer that provides a rich set of features.

An Application Gateway serves as the single point of contact for client applications. It distributes incoming application traffic across multiple backend pools, which include public and private Azure Load Balancers, Azure virtual machines, Virtual Machine Scale Sets, hostnames, Azure App Service, and on-premises/external servers. Azure Application Gateway uses several components, shown in the following picture, to distribute the incoming traffic across the backend applications.

The components used in an application gateway

For more information, see How an Application Gateway works.

Azure Web Application Firewall (WAF)

Azure Web Application Firewall (WAF) provides centralized protection of web applications from common exploits and vulnerabilities. WAF is based on rules from the OWASP (Open Web Application Security Project) core rule sets.

Azure Web Application Firewall (WAF) also lets you create custom rules that are evaluated for each request. These rules hold a higher priority than the rest of the rules in the managed rule sets. A custom rule contains a rule name, a rule priority, and an array of matching conditions. If these conditions are met, an action is taken to allow or block the request.

Web applications can be the target of malicious attacks that exploit common, known vulnerabilities, such as SQL injection attacks, DDoS attacks, and cross-site scripting attacks. Preventing such attacks in application code can be challenging and may require rigorous maintenance, patching, and monitoring at many layers of the application topology. A centralized web application firewall makes security management much simpler and gives application administrators better assurance against threats and intrusions. A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location instead of securing each individual web application. An existing application gateway can easily be converted into a WAF-enabled application gateway.

Azure Application Gateway allows the association of a separate WAF policy to each individual listener. For example, if there are three sites behind the same Application Gateway or WAF, you can configure three separate WAF policies (one for each listener) to customize the exclusions, custom rules, and managed rule sets for one site without affecting the other two. If you want a single policy to apply to all sites, you can just associate the policy with the Application Gateway, rather than the individual listeners, to make it apply globally. Application Gateway also supports per-URI WAF Policies. This feature requires the use of a Path-based routing rule instead of a basic routing rule and requires the definition of a URL Path Map where a specific WAF policy can be associated with a given URL. For more information, see Configure per-site WAF policies using Azure PowerShell. The order of precedence for WAF policies is as follows:

  • If a per-URI WAF policy exists for the current path, it applies and no other WAF policy takes effect.
  • If no per-URI WAF policy exists for the current path but a WAF policy exists for the current listener, that policy applies and no other WAF policy takes effect.
  • If no WAF policy exists for the current URI or listener, the global WAF policy, if any, applies.
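
For example, here's a minimal Azure CLI sketch of creating a WAF policy and associating it with a single listener rather than with the whole gateway. The resource names (yelb-rg, yelb-appgw, yelb-waf-policy, yelb-listener) are hypothetical placeholders:

```bash
# Create a standalone WAF policy.
az network application-gateway waf-policy create \
  --resource-group yelb-rg \
  --name yelb-waf-policy

# Associate the policy with a single listener so it applies only to that site.
az network application-gateway http-listener update \
  --resource-group yelb-rg \
  --gateway-name yelb-appgw \
  --name yelb-listener \
  --waf-policy yelb-waf-policy
```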

The Application Gateway WAF can be configured to run in the following two modes:

  • Detection mode: Monitors and logs all threat alerts. You turn on logging diagnostics for Application Gateway in the Diagnostics section. You must also make sure that the WAF log is selected and turned on. The web application firewall doesn't block incoming requests when it's operating in Detection mode.
  • Prevention mode: Blocks intrusions and attacks that the rules detect. The attacker receives a "403 unauthorized access" exception, and the connection is closed. Prevention mode records such attacks in the WAF logs.
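
As a reference, switching an existing WAF policy between the two modes takes a single Azure CLI call; the resource names below are hypothetical placeholders:

```bash
# Enable the policy and run it in Prevention mode (use --mode Detection to only log).
az network application-gateway waf-policy policy-setting update \
  --resource-group yelb-rg \
  --policy-name yelb-waf-policy \
  --state Enabled \
  --mode Prevention
```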

You can configure the Application Gateway to store diagnostic logs and metrics in a Log Analytics workspace. In this case, the WAF logs are also stored in the workspace, and you can query them by using the Kusto Query Language (KQL).
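
A minimal sketch of wiring this up, assuming an existing Application Gateway and Log Analytics workspace (the resource IDs are placeholders):

```bash
# Send the WAF firewall log to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name appgw-waf-logs \
  --resource "<application-gateway-resource-id>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"category":"ApplicationGatewayFirewallLog","enabled":true}]'

# Query the most recent blocked requests with KQL.
az monitor log-analytics query \
  --workspace "<log-analytics-workspace-guid>" \
  --analytics-query "AzureDiagnostics
    | where Category == 'ApplicationGatewayFirewallLog'
    | where action_s == 'Blocked'
    | take 20"
```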

Azure Application Gateway for Containers

The Application Gateway for Containers is a new cutting-edge Azure service that offers load balancing and dynamic traffic management for applications running in a Kubernetes cluster. As part of Azure's Application Load Balancing portfolio, this innovative product provides an enhanced experience for developers and administrators. The Application Gateway for Containers represents the evolution of the Application Gateway Ingress Controller (AGIC) and enables Azure Kubernetes Service (AKS) customers to leverage Azure's native Application Gateway load balancer. Azure Application Gateway for Containers enables you to host multiple web applications on the same port, utilizing unique backend services. This allows for efficient multi-site hosting and simplifies the management of your containerized applications. The Application Gateway for Containers fully supports both the Gateway API and Ingress API Kubernetes objects for traffic load balancing.

Azure Application Gateway for Containers offers a range of features and benefits, including:

  • Load Balancing: The service efficiently distributes incoming traffic across multiple containers, ensuring optimal performance and scalability. For more information, see Load balancing features.
  • Implementation of Gateway API: Application Gateway for Containers supports the Gateway API, which allows for the definition of routing rules and policies in a Kubernetes-native way. For more information, see Implementation of Gateway API.
  • Custom Health Probe: You can define custom health probes to monitor the health of your containers and automatically route traffic away from unhealthy instances. For more information, see Custom health probe for Application Gateway for Containers.
  • Session Affinity: The service provides session affinity, allowing you to maintain a consistent user experience by routing subsequent requests from the same client to the same container. For more information, see Application Gateway for Containers session affinity overview.
  • TLS Policy: Application Gateway for Containers supports TLS termination, allowing you to offload the SSL/TLS encryption and decryption process to the gateway. For more information, see Application Gateway for Containers TLS policy overview.
  • Header Rewrites: Application Gateway for Containers offers the capability to rewrite HTTP headers of client requests and responses from backend targets. Header Rewrites utilize the IngressExtension custom resource definition (CRD) of the Application Gateway for Containers. For more details, refer to the documentation on Header Rewrites for Ingress API and Gateway API.
  • URL Rewrites: Application Gateway for Containers allows you to modify the URL of a client request, including the hostname and/or path. When Application Gateway for Containers initiates the request to the backend target, it includes the newly rewritten URL. Additional information on URL Rewrites can be found in the documentation for Ingress API and Gateway API.

For more information, see:

Azure Front Door

Azure Front Door is a cloud Content Delivery Network (CDN) offered by Microsoft that allows for fast, reliable, and secure access to web content. It operates using Microsoft's global edge network, which includes numerous points of presence (PoPs) distributed around the world. Some of the supported features of Azure Front Door include:

  • Global delivery scale: Leveraging over 118 edge locations across 100 metro cities, Azure Front Door improves application performance and reduces latency. It also supports anycast networking and split TCP connections.
  • Modern app and architecture delivery: Azure Front Door integrates with DevOps tools, supports custom domains, and enables load balancing and routing across different origins. It also provides enhanced rules engine capabilities and built-in analytics and reporting.
  • Simple and cost-effective: Azure Front Door offers unified static and dynamic delivery in a single tier, providing caching, SSL offload, and DDoS protection. It includes free managed SSL certificates and has a simplified cost model.
  • Intelligent secure internet perimeter: Azure Front Door provides built-in layer 3-4 DDoS protection, seamless integration with Web Application Firewall (WAF), and Azure DNS for domain protection. It also offers protection against layer 7 DDoS attacks and malicious actors using Bot manager rules. Additionally, it supports private connections to backend services using Private Link.

For more information on Azure Front Door and its features, you can visit the Azure Front Door documentation.

Solutions to deploy and protect the Yelb application on Azure

This section provides an overview of various solutions to deploy the Yelb application to an Azure Kubernetes Service (AKS) cluster and secure access to its UI service.

One option is to deploy the Yelb application to an AKS cluster and secure access to its UI service using Azure Web Application Firewall (WAF). Azure WAF provides a layer of protection against common web vulnerabilities and allows you to define security rules to protect your application. For more information on deploying Yelb with Azure WAF, refer to the Azure Web Application Firewall documentation.

Another option is to deploy the Yelb application to an AKS cluster and secure access to its UI service using an open-source web application firewall like ModSecurity. ModSecurity is a widely used web application firewall that can provide additional security and protection for your application. To learn more about deploying Yelb with ModSecurity, refer to the ModSecurity documentation.

Use Application Gateway WAFv2 with NGINX Ingress controller

In this solution, the Yelb application is hosted by an Azure Kubernetes Service (AKS) cluster and exposed via an ingress controller such as the NGINX ingress controller. The ingress controller service is exposed via an internal (or private) load balancer. Internal load balancers are used to load balance traffic inside a virtual network, in this case the virtual network hosting the AKS cluster. An internal load balancer frontend can also be accessed from an on-premises network in a hybrid scenario. For more information on how to use an internal load balancer to restrict access to your applications in Azure Kubernetes Service (AKS), see Use an internal load balancer with Azure Kubernetes Service (AKS).
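
A minimal Helm sketch of deploying the NGINX ingress controller so that its service is fronted by the AKS internal load balancer; the release and namespace names are just examples, and the annotation is the documented Azure one:

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# The azure-load-balancer-internal annotation makes AKS provision a private,
# internal load balancer instead of a public one.
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=true
```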

This sample supports installing a managed NGINX ingress controller with the application routing add-on or an unmanaged NGINX ingress controller using the Helm chart. The application routing add-on with the NGINX ingress controller provides the following features:

For other configurations, see:

The Yelb application is secured with an Azure Application Gateway resource that is deployed in a dedicated subnet within the same virtual network as the AKS cluster or in a peered virtual network. Access to the Yelb application hosted by Azure Kubernetes Service (AKS) and exposed via Azure Application Gateway is secured by Azure Web Application Firewall (WAF), which provides centralized protection of web applications from common exploits and vulnerabilities. The solution architecture is depicted in the diagram below.

Application Gateway WAFv2 with NGINX Ingress controller

The solution architecture is designed as follows:

  • The AKS cluster is deployed with the following features:
    • Network Configuration: Azure CNI Overlay
    • Network Dataplane: Cilium
    • Network Policy: Cilium
  • The Application Gateway handles TLS termination and communicates with the backend application over HTTPS.
  • The Application Gateway Listener utilizes an SSL certificate obtained from Azure Key Vault.
  • The Azure WAF Policy associated with the Listener is used to run OWASP rules and custom rules against the incoming request and block malicious attacks.
  • The Application Gateway Backend HTTP Settings are configured to invoke the Yelb application via HTTPS on port 443.
  • The Application Gateway Backend Pool and Health Probe are set to call the NGINX ingress controller through the AKS internal load balancer using HTTPS.
  • The NGINX ingress controller is deployed to use the AKS internal load balancer instead of the public one.
  • The Azure Kubernetes Service (AKS) cluster is configured with the Azure Key Vault provider for Secrets Store CSI Driver add-on to retrieve secrets, certificates, and keys from Azure Key Vault via a CSI volume.
  • A SecretProviderClass is used to retrieve the same certificate used by the Application Gateway from Key Vault.
  • A Kubernetes ingress object employs the NGINX ingress controller to expose the application via HTTPS through the AKS internal load balancer.
  • The Yelb service is of type ClusterIP, as it is exposed via the NGINX ingress controller.
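
A minimal sketch of the SecretProviderClass described above, assuming a certificate named yelb-cert in the key vault and a user-assigned managed identity for the CSI driver (all names and IDs are placeholders):

```bash
kubectl apply -f - <<'EOF'
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: yelb-tls
  namespace: yelb
spec:
  provider: azure
  # Mirror the certificate into a Kubernetes TLS secret that the ingress can reference.
  secretObjects:
    - secretName: yelb-tls-secret
      type: kubernetes.io/tls
      data:
        - objectName: yelb-cert
          key: tls.key
        - objectName: yelb-cert
          key: tls.crt
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "<client-id-of-the-csi-driver-identity>"
    keyvaultName: "<key-vault-name>"
    tenantId: "<tenant-id>"
    objects: |
      array:
        - |
          objectName: yelb-cert
          objectType: secret
EOF
```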

The Application Gateway Listener and the Kubernetes ingress are configured to use the same hostname. Here are the reasons why it is important to use the same hostname for a service proxy and a backend web application:

  • Preservation of Session State: When a different hostname is used between the proxy and the backend application, session state can get lost. This means that user sessions may not persist properly, resulting in a poor user experience and potential loss of data.
  • Authentication Failure: If the hostname differs between the proxy and the backend application, authentication mechanisms may fail. This can lead to users being unable to log in or access secure resources within the application.
  • Inadvertent Exposure of URLs: If the hostname is not preserved, there is a risk that backend URLs may be exposed to end users. This can lead to potential security vulnerabilities and unauthorized access to sensitive information.
  • Cookie Issues: Cookies play a crucial role in maintaining user sessions and passing information between the client and the server. When the hostname differs, cookies may not work as expected, leading to issues such as failed authentication, improper session handling, and incorrect redirection.
  • End-to-End TLS/SSL Requirements: If end-to-end TLS/SSL is required for secure communication between the proxy and the backend service, a matching TLS certificate for the original hostname is necessary. Using the same hostname simplifies the certificate management process and ensures that secure communication is established seamlessly.

By using the same hostname for the service proxy and the backend web application, these potential problems can be avoided. The backend application will see the same domain as the web browser, ensuring that session state, authentication, and URL handling are all functioning correctly. This is especially important in platform as a service (PaaS) offerings, where the complexity of certificate management can be reduced by utilizing the managed TLS certificates provided by the PaaS service. The following diagram shows the steps for the message flow during deployment and runtime.
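
To make this concrete, here's a minimal sketch of a Kubernetes ingress that uses the same hostname as the Application Gateway Listener and the TLS secret mirrored from Key Vault; yelb.contoso.com is a hypothetical domain:

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: yelb-ingress
  namespace: yelb
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - yelb.contoso.com        # must match the Application Gateway Listener hostname
      secretName: yelb-tls-secret # mirrored from Key Vault by the CSI driver
  rules:
    - host: yelb.contoso.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: yelb-ui     # ClusterIP service of the Yelb UI
                port:
                  number: 80
EOF
```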

Application Gateway WAFv2 with NGINX Ingress controller details

Deployment workflow

The following steps describe the deployment process. This workflow corresponds to the green numbers in the preceding diagram.

  1. A security engineer generates a certificate for the custom domain that the workload uses, and saves it in an Azure key vault. You can obtain a valid certificate from a well-known certification authority (CA).
  2. A platform engineer specifies the necessary information in the main.bicepparams Bicep parameters file and deploys the Bicep modules to create the Azure resources. The necessary information includes:
    • A prefix for the Azure resources.
    • The name and resource group of the existing Azure Key Vault that holds the TLS certificate for the workload hostname and custom domain.
    • The name of the certificate in the key vault.
    • The name and resource group of the DNS zone that's used to resolve the custom domain of the Application Gateway.
  3. The deployment script uses Helm and YAML manifests to create the NGINX ingress controller and a sample httpbin web application. The script defines a SecretProviderClass that retrieves the TLS certificate from the specified Azure key vault by using the user-defined managed identity of the Azure Key Vault provider for Secrets Store CSI Driver. The script also creates a Kubernetes secret. The deployment and ingress objects are configured to use the certificate that's stored in the Kubernetes secret.
  4. The Application Gateway Listener retrieves the TLS certificate from Azure Key Vault.
  5. When a DevOps engineer deploys the Yelb application, the Kubernetes ingress object uses the certificate retrieved by the Azure Key Vault provider for Secrets Store CSI Driver from Key Vault to expose the Yelb UI service via HTTPS.
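
As a reference for step 1, here's a minimal sketch that creates a self-signed certificate for testing (a production deployment would use a certificate from a well-known CA) and imports it into the key vault; all names are placeholders:

```bash
# Self-signed certificate for the workload hostname (testing only).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout yelb.key -out yelb.crt -subj "/CN=yelb.contoso.com"

# Key Vault imports certificates in PFX/PKCS#12 format.
openssl pkcs12 -export -in yelb.crt -inkey yelb.key \
  -out yelb.pfx -passout pass:P@ssw0rd

az keyvault certificate import \
  --vault-name <key-vault-name> \
  --name yelb-cert \
  --file yelb.pfx \
  --password P@ssw0rd
```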

Runtime workflow

The following steps describe the message flow for a request that an external client application initiates during runtime. This workflow corresponds to the orange numbers in the preceding diagram.

  1. The client application calls the Yelb application using its hostname. The DNS zone that's associated with the custom domain of the Application Gateway Listener uses an A record to resolve the DNS query with the address of the Azure Public IP used by the Frontend IP Configuration of the Application Gateway.
  2. The request is sent to the Azure Public IP used by the Frontend IP Configuration of the Application Gateway.
  3. The Application Gateway performs the following actions:
    • The Application Gateway handles TLS termination and communicates with the backend application over HTTPS.
    • The Application Gateway Listener utilizes an SSL certificate obtained from Azure Key Vault.
    • The Azure WAF Policy associated with the Listener is used to run OWASP rules and custom rules against the incoming request and block malicious attacks.
    • The Application Gateway Backend HTTP Settings are configured to invoke the Yelb application via HTTPS on port 443.
  4. The Application Gateway Backend Pool calls the NGINX ingress controller through the AKS internal load balancer using HTTPS.
  5. The request is sent to one of the agent nodes that hosts a pod of the NGINX ingress controller.
  6. One of the NGINX ingress controller replicas handles the request and sends the request to one of the service endpoints of the yelb-ui service.
  7. The yelb-ui calls the yelb-appserver service.
  8. The yelb-appserver calls the yelb-db and yelb-cache services.
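
As a reference for step 1, a minimal Azure CLI sketch of the A record that maps the hostname to the Application Gateway public IP; the zone and record names are placeholders:

```bash
az network dns record-set a add-record \
  --resource-group <dns-zone-resource-group> \
  --zone-name contoso.com \
  --record-set-name yelb \
  --ipv4-address <application-gateway-public-ip>
```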

Here you can find Bicep templates, Bash scripts, and YAML manifests to create this architecture and deploy the Yelb application.

Use Application Gateway Ingress Controller and Azure WAF Policy

In this architecture, the Application Gateway Ingress Controller is installed using the AGIC add-on for AKS. You can also install the Application Gateway Ingress Controller via a Helm chart.

Use Application Gateway Ingress Controller and Azure WAF Policy

The Application Gateway Ingress Controller (AGIC) is a Kubernetes application, which makes it possible for Azure Kubernetes Service (AKS) customers to leverage Azure's native Application Gateway L7 load-balancer to expose cloud software to the Internet. AGIC monitors the Kubernetes cluster it's hosted on and continuously updates an Application Gateway, so that selected services are exposed to the Internet.

The Ingress Controller runs in its own pod on the customer's AKS cluster. AGIC monitors a subset of Kubernetes resources for changes. The state of the AKS cluster is translated to Application Gateway-specific configuration and applied via Azure Resource Manager (ARM). For more information, see What is Application Gateway Ingress Controller?.

The primary benefit of deploying AGIC as an AKS add-on is that it's much simpler than deploying through Helm. For a new setup, you can deploy a new Application Gateway and a new AKS cluster with AGIC enabled as an add-on in one line in the Azure CLI. The add-on is also a fully managed service, which provides added benefits such as automatic updates and increased support. Both ways of deploying AGIC (Helm and the AKS add-on) are fully supported by Microsoft. Additionally, the add-on allows for better integration with AKS as a first-class add-on. The Application Gateway Ingress Controller (AGIC) offers the following advantages:

  1. Native Integration: AGIC provides native integration with Azure services, specifically Azure Application Gateway. This allows for seamless and efficient routing of traffic to services running on Azure Kubernetes Service (AKS).
  2. Simplified Deployment: Deploying AGIC as an AKS add-on is straightforward and simpler compared to other methods like using Helm charts. It enables a quick and easy setup of an Application Gateway and AKS cluster with AGIC enabled.
  3. Fully Managed Service: AGIC as an add-on is a fully managed service, providing benefits such as automatic updates and increased support from Microsoft. It ensures the Ingress Controller remains up-to-date and adds an additional layer of support.
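
As mentioned above, a single Azure CLI command can deploy a new AKS cluster with the AGIC add-on enabled and provision a new Application Gateway alongside it; a minimal sketch, with placeholder names and CIDR:

```bash
az aks create \
  --resource-group yelb-rg \
  --name yelb-aks \
  --network-plugin azure \
  --enable-managed-identity \
  --enable-addons ingress-appgw \
  --appgw-name yelb-appgw \
  --appgw-subnet-cidr "10.225.0.0/16" \
  --generate-ssh-keys

# Or enable the add-on on an existing cluster against an existing gateway:
az aks enable-addons \
  --resource-group yelb-rg \
  --name yelb-aks \
  --addons ingress-appgw \
  --appgw-id "<application-gateway-resource-id>"
```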

However, there are also some disadvantages and limitations to consider when using AGIC:

  1. Single Cloud Approach: AGIC is primarily adopted by customers who follow a single-cloud approach, usually focused on Azure. It may not be the best choice for customers who require a multi-cloud architecture, where deployment across different cloud platforms like AWS and GCP is essential. In this case, customers may decide to use a cloud-agnostic ingress controller such as NGINX, Traefik, or HAProxy to avoid vendor lock-in issues.
  2. Container Network Interface Support: AGIC is not supported by all Container Network Interfaces (CNI) configurations. For example, the Azure CNI Overlay does not currently support AGIC. It is important to verify that the chosen CNI is compatible with AGIC before implementation.

For customers aiming for a multi-cloud approach or utilizing specific CNIs like Azure CNI Overlay, alternative ingress controllers like NGINX, HAProxy, or Traefik offer more flexibility and broader compatibility across different cloud platforms. For more information on the Azure Application Gateway Ingress Controller, see the following resources:

Use Azure Application Gateway for Containers

This solution leverages the cutting-edge Application Gateway for Containers, a new Azure service that provides load balancing and dynamic traffic management for applications in a Kubernetes cluster.

Use Azure Application Gateway for Containers

This innovative product enhances the experience for developers and administrators as part of Azure's Application Load Balancing portfolio. It builds upon the capabilities of the Application Gateway Ingress Controller (AGIC) and allows Azure Kubernetes Service (AKS) customers to utilize Azure's native Application Gateway load balancer. This guide walks you through deploying an Azure Kubernetes Service (AKS) cluster with an Application Gateway for Containers in a fully automated manner, supporting both bring your own (BYO) and managed-by-ALB deployments. As described in the previous section, the Application Gateway for Containers offers several features:

  • Load Balancing: Efficiently distributes incoming traffic across multiple containers for optimal performance and scalability.
  • Gateway API Implementation: Supports the Gateway API, allowing you to define routing rules and policies in a Kubernetes-native way.
  • Custom Health Probe: Define custom health probes to monitor container health and automatically route traffic away from unhealthy instances.
  • Session Affinity: Provides session affinity, routing subsequent requests from the same client to the same container for a consistent user experience.
  • TLS Policy: Supports TLS termination, allowing SSL/TLS encryption and decryption to be offloaded to the gateway.
  • Header Rewrites: Rewrite HTTP headers of client requests and responses from backend targets using the IngressExtension custom resource definition. Learn more about Ingress API and Gateway API.
  • URL Rewrites: Modify the URL of client requests, including hostname and/or path, and include the newly rewritten URL when initiating requests to backend targets. Find more information on Ingress API and Gateway API.
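
A minimal sketch of exposing the Yelb UI through Application Gateway for Containers with the Gateway API, assuming the managed-by-ALB deployment model; the ALB name, namespaces, and resource names are placeholders:

```bash
kubectl apply -f - <<'EOF'
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: yelb-gateway
  namespace: yelb
  annotations:
    alb.networking.azure.io/alb-namespace: alb-infra
    alb.networking.azure.io/alb-name: yelb-alb
spec:
  gatewayClassName: azure-alb-external
  listeners:
    - name: http
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: yelb-route
  namespace: yelb
spec:
  parentRefs:
    - name: yelb-gateway
  rules:
    - backendRefs:
        - name: yelb-ui   # ClusterIP service of the Yelb UI
          port: 80
EOF
```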

However, at this time the Azure Application Gateway for Containers has some limitations. For example, the following features are not currently supported:

It's important to consider that while Application Gateway for Containers can be a great choice for customers adopting a single-cloud approach, particularly focusing on Azure, it may not be the best fit for customers requiring a multi-cloud architecture. If deployment across different cloud platforms such as AWS and GCP is essential, customers might opt for a cloud-agnostic ingress controller like NGINX, Traefik, or HAProxy to avoid vendor lock-in issues. For more information, see Deploying an Azure Kubernetes Service (AKS) Cluster with Application Gateway for Containers.

Use Azure Front Door

The following solution uses Azure Front Door as a global layer 7 load balancer to securely expose and protect a workload that runs in Azure Kubernetes Service (AKS) by using the Azure Web Application Firewall, and an Azure Private Link service.

Use Azure Front Door

This solution uses Azure Front Door Premium, end-to-end TLS encryption, Azure Web Application Firewall, and a Private Link service to securely expose and protect a workload that runs in AKS.

This architecture uses the Azure Front Door TLS and Secure Sockets Layer (SSL) offload capability to terminate the TLS connection and decrypt the incoming traffic at the front door. The traffic is reencrypted before it's forwarded to the origin, which is a web application that's hosted in an AKS cluster. HTTPS is configured as the forwarding protocol on Azure Front Door when Azure Front Door connects to the AKS-hosted workload that's configured as an origin. This practice enforces end-to-end TLS encryption for the entire request process, from the client to the origin. For more information, see Secure your origin with Private Link in Azure Front Door Premium.

The NGINX ingress controller exposes the AKS-hosted web application. The NGINX ingress controller is configured to use a private IP address as a front-end IP configuration of the kubernetes-internal internal load balancer. The NGINX ingress controller uses HTTPS as the transport protocol to expose the web application. For more information, see Create an ingress controller by using an internal IP address.

This solution is recommended in scenarios where customers deploy the same web application across multiple regional AKS clusters for business continuity and disaster recovery, or even across multiple cloud platforms or on-premises installations. In this case, Front Door can forward incoming calls to one of the backends, also known as origins, using one of the available routing methods:

  • Latency: Latency-based routing ensures that requests are sent to the lowest-latency origins acceptable within a sensitivity range. In other words, requests get sent to the nearest set of origins with respect to network latency.
  • Priority: A priority can be set to your origins when you want to configure a primary origin to service all traffic. The secondary origin can be a backup in case the primary origin becomes unavailable.
  • Weighted: A weighted value can be assigned to your origins when you want to distribute traffic across a set of origins evenly or according to the weight coefficients. Traffic gets distributed by the weight value if the latencies of the origins are within the acceptable latency sensitivity range in the origin group.
  • Session Affinity: You can configure session affinity for your frontend hosts or domains to ensure requests from the same end user get sent to the same origin.
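
For example, a priority-based failover pair of origins could be sketched with the Azure CLI as follows; the profile, origin group, and host names are placeholders:

```bash
# Primary origin: receives all traffic while healthy.
az afd origin create \
  --resource-group yelb-rg \
  --profile-name yelb-afd \
  --origin-group-name yelb-origins \
  --origin-name primary \
  --host-name primary.yelb.contoso.com \
  --priority 1 --weight 1000 \
  --enabled-state Enabled

# Secondary origin: only used when the primary is unhealthy.
az afd origin create \
  --resource-group yelb-rg \
  --profile-name yelb-afd \
  --origin-group-name yelb-origins \
  --origin-name secondary \
  --host-name secondary.yelb.contoso.com \
  --priority 2 --weight 1000 \
  --enabled-state Enabled
```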

The following diagram shows the steps for the message flow during deployment and runtime.

Front Door Flow

Deployment workflow

The following steps describe the deployment process. This workflow corresponds to the green numbers in the preceding diagram.

  1. A security engineer generates a certificate for the custom domain that the workload uses, and saves it in an Azure Key Vault. You can obtain a valid certificate from a well-known certification authority (CA).
  2. A platform engineer specifies the necessary information in the parameters and deploys the infrastructure using an Infrastructure as Code (IaC) technology such as Terraform or Bicep. The necessary information includes:
    • A prefix for the Azure resources.
    • The name and resource group of the existing Azure Key Vault that holds the TLS certificate for the workload hostname and the Azure Front Door custom domain.
    • The name of the certificate in the key vault.
    • The name and resource group of the DNS zone that's used to resolve the Azure Front Door custom domain.
  3. You can use a deployment script to install the following packages to your AKS cluster. For more information, check the parameters section of the Bicep module:
  4. An Azure Front Door secret resource is used to manage and store the TLS certificate that's in the Azure key vault. This certificate is used by the custom domain that's associated with the Azure Front Door endpoint.

Note

At the end of the deployment, you need to approve the private endpoint connection before traffic can pass to the origin privately. For more information, see Secure your origin with Private Link in Azure Front Door Premium. To approve private endpoint connections, use the Azure portal, the Azure CLI, or Azure PowerShell. For more information, see Manage a private endpoint connection.
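
A minimal sketch of approving the pending connection on the Private Link service with the Azure CLI; the service and connection names are placeholders:

```bash
# List pending connections to find the connection name.
az network private-link-service show \
  --resource-group yelb-rg \
  --name yelb-pls \
  --query "privateEndpointConnections[].{name:name,status:privateLinkServiceConnectionState.status}"

# Approve the connection that Azure Front Door created.
az network private-link-service connection update \
  --resource-group yelb-rg \
  --service-name yelb-pls \
  --name <connection-name> \
  --connection-status Approved
```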

Runtime workflow

The following steps describe the message flow for a request that an external client application initiates during runtime. This workflow corresponds to the orange numbers in the preceding diagram.

  1. The client application uses its custom domain to send a request to the web application. The DNS zone that's associated with the custom domain uses a CNAME record to redirect the DNS query for the custom domain to the original hostname of the Azure Front Door endpoint.
  2. Azure Front Door traffic routing occurs in several stages. Initially, the request is sent to one of the Azure Front Door points of presence. Then Azure Front Door uses the configuration to determine the appropriate destination for the traffic. Various factors can influence the routing process, such as the Azure Front Door caching configuration, web application firewall (WAF), routing rules, and rules engine. For more information, see Routing architecture overview.
  3. Azure Front Door forwards the incoming request to the Azure private endpoint that's connected to the Private Link service that exposes the AKS-hosted workload.
  4. The request is sent to the Private Link service.
  5. The request is forwarded to the kubernetes-internal AKS internal load balancer.
  6. The request is sent to one of the agent nodes that hosts a pod of the NGINX ingress controller.
  7. One of the NGINX ingress controller replicas handles the request.
  8. The NGINX ingress controller forwards the request to one of the workload pods.

For more information, see Use Azure Front Door to secure AKS workloads.

Use NGINX Ingress Controller and ModSecurity

The following solution makes use of the NGINX ingress controller to expose the Yelb application and ModSecurity to block any malicious or suspicious traffic based on predefined OWASP or custom rules. ModSecurity is an open-source web application firewall (WAF) that is compatible with popular web servers such as Apache, NGINX, and IIS. It provides protection from a wide range of attacks by using a powerful rule-definition language.

Use NGINX Ingress Controller and ModSecurity

ModSecurity can be used with the NGINX Ingress controller to provide an extra layer of security to web applications exposed via Kubernetes. The NGINX Ingress controller acts as a reverse proxy, forwarding traffic to the web application, while ModSecurity inspects the incoming requests and blocks any malicious or suspicious traffic based on the defined rules.
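
For instance, with the NGINX ingress controller, ModSecurity and the OWASP core rule set can be switched on per ingress through annotations; the ingress and namespace names below are examples, and the modsecurity-snippet sets the rule engine to blocking mode:

```bash
kubectl annotate ingress yelb-ingress --namespace yelb \
  nginx.ingress.kubernetes.io/enable-modsecurity="true" \
  nginx.ingress.kubernetes.io/enable-owasp-core-rules="true" \
  nginx.ingress.kubernetes.io/modsecurity-snippet="SecRuleEngine On"
```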

Using ModSecurity with NGINX Ingress controllers in Kubernetes provides a cloud-agnostic solution that can be deployed on any managed Kubernetes cluster on any cloud platform. This means the solution can be deployed "as is" on various cloud platforms, including:

The cloud-agnostic nature of this solution allows multi-cloud customers to deploy and configure their web applications, such as Yelb, consistently across different cloud platforms without significant modifications. It provides flexibility and portability, enabling you to switch between cloud providers or have a multi-cloud setup while maintaining consistent security measures. Here you can find Bicep templates, Bash scripts, and YAML manifests to create this architecture and deploy the Yelb application. For more information, see the following resources:

Conclusions

In conclusion, there are multiple architectures available to deploy and protect the Yelb application on Azure Kubernetes Service (AKS). These solutions include using Azure Web Application Firewall (WAF) with Azure Application Gateway or Azure Front Door, leveraging the open-source web application firewall ModSecurity with the NGINX ingress controller, or using the cutting-edge Application Gateway for Containers. Each of these solutions offers its own set of features and benefits, allowing you to choose the one that best suits your requirements. Whether you need regional load balancing, integrated WAF protection, or a cloud-agnostic approach, Azure provides the necessary tools and services to securely deploy and protect your Yelb application.