- Docker
- kubectl
- Azure CLI
- Helm 3
For every PowerShell script mentioned below there is a bash version in the scripts-bash folder.
In this sample we'll be using Azure Kubernetes Service, but you can install Dapr on any Kubernetes cluster. Run this script to deploy an AKS cluster or follow the steps below.
- Log in to Azure:
az login
- Set the default subscription:
az account set -s <subscription-id>
- Create a resource group (use an Azure region such as westus2 for <location>):
az group create --name <resource-group-name> --location <location>
- Create an Azure Kubernetes Service cluster:
az aks create --resource-group <resource-group-name> --name <cluster-name> --node-count 2 --kubernetes-version 1.17.9 --enable-addons http_application_routing --generate-ssh-keys --location <location>
- Connect to the cluster:
az aks get-credentials --resource-group <resource-group-name> --name <cluster-name>
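Once the credentials are merged, you can do a quick sanity check that kubectl points at the new cluster (you should see the 2 nodes in the Ready state):
kubectl get nodes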
References:
Run this script to install Dapr on the Kubernetes cluster or follow the steps below.
helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
kubectl create namespace dapr-system
helm install dapr dapr/dapr --namespace dapr-system
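To verify that Dapr was installed, list the pods in the dapr-system namespace; the control-plane pods (operator, placement, sentry, sidecar-injector) should reach the Running state:
kubectl get pods --namespace dapr-system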
References:
Run this script to execute steps 1 through 4 or follow the steps below.
- Create a storage account of kind StorageV2 (general purpose v2) in your Azure subscription:
az storage account create `
  --name <storage-account-name> `
  --resource-group <resource-group-name> `
  --location <location> `
  --sku Standard_RAGRS `
  --kind StorageV2
- Create a new blob container in your storage account:
az storage container create `
  --name orders `
  --account-name <storage-account-name> `
  --auth-mode login
- Generate a shared access signature for the storage account:
az storage account generate-sas `
  --account-name <storage-account-name> `
  --expiry <YYYY-MM-DD> `
  --https-only `
  --permissions rwdlacup `
  --resource-types sco `
  --services bfqt
- Copy one of the storage account access key values:
az storage account keys list --account-name <storage-account-name>
- Replace <container_base_url> in batchProcessor/config.json with https://<storage-account-name>.blob.core.windows.net/orders/.
- Replace <storage_sas_token> in batchProcessor/config.json with the SAS token that you generated earlier (make sure to leave a "?" before the pasted SAS).
- Update batchReceiver/config.json with your storage account name, resource group name, and Azure subscription ID.
- Replace <storage_account_name> and <storage_account_access_key> in deploy/blob-storage.yaml with your storage account name and the access key value you copied earlier.
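For orientation, a Dapr Azure Blob Storage component with those placeholders filled in might look roughly like the sketch below. The component name and the exact metadata field names are assumptions (they vary between Dapr versions); deploy/blob-storage.yaml in the repo is the authoritative version.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: blob-storage
spec:
  type: bindings.azure.blobstorage
  metadata:
  - name: storageAccount
    value: <storage_account_name>
  - name: storageAccessKey
    value: <storage_account_access_key>
  - name: container
    value: orders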
References:
In this section we will deploy an NGINX ingress controller with a static public IP and map the IP to a DNS name.
Run this script to execute steps 1 through 6 or follow the steps below.
- Initialize variables:
$resourceGroup = "<resource-group-name>"
$clusterName = "<aks-name>"
# Choose a name for your public IP address which we will use in the next steps
$publicIpName = "<public-ip-name>"
# Choose a DNS name which we will create and link to the public IP address in the next steps.
# Your fully qualified domain name will be: <dns-label>.<location>.cloudapp.azure.com
$dnsName = "<dns-label>"
- Get the cluster resource group name:
$clusterResourceGroupName = az aks show --resource-group $resourceGroup --name $clusterName --query nodeResourceGroup -o tsv
Write-Host "Cluster Resource Group Name:" $clusterResourceGroupName
- Create a public IP address with the static allocation method in the AKS cluster resource group obtained in the previous step:
$ip = az network public-ip create --resource-group $clusterResourceGroupName --name $publicIpName --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv
Write-Host "IP:" $ip
- Create a namespace for your ingress resources:
kubectl create namespace ingress-basic
- Use Helm to deploy an NGINX ingress controller:
helm repo update
helm install nginx-ingress stable/nginx-ingress `
  --namespace ingress-basic `
  --set controller.replicaCount=2 `
  --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux `
  --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux `
  --set controller.service.loadBalancerIP=$ip
- Map a DNS name to the public IP:
Write-Host "Setting fully qualified domain name (FQDN)..." $publicIpId = az resource list --name $publicIpName --query [0].id -o tsv az network public-ip update --ids $publicIpId --dns-name $dnsName Write-Host "FQDN:" az network public-ip list --resource-group $clusterResourceGroupName --query "[?name=='$publicIpName'].[dnsSettings.fqdn]" -o tsv
- Copy the domain name; we will need it in the next step.
- Verify the installation. It may take a few minutes for the LoadBalancer IP to be available. You can watch the status by running:
kubectl get service -l app=nginx-ingress --namespace ingress-basic -w
Now you should get "default backend - 404" when sending a request to either the IP or the domain name.
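As a quick test from your machine, a request to the FQDN (or the IP) should return that 404 from the NGINX default backend, for example:
curl http://<dns-label>.<location>.cloudapp.azure.com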
References: Create an ingress controller with a static public IP
The Event Grid web hook that we'll configure later has to use HTTPS, and self-signed certificates are not supported; the certificate needs to come from a certificate authority. We will use the cert-manager project to automatically generate and configure Let's Encrypt certificates.
- Install the CustomResourceDefinition resources:
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.13/deploy/manifests/00-crds.yaml
- Label the ingress-basic namespace, where cert-manager will be installed, to disable resource validation:
kubectl label namespace ingress-basic cert-manager.io/disable-validation=true
- Add the Jetstack Helm repository:
helm repo add jetstack https://charts.jetstack.io
- Install the cert-manager Helm chart:
helm repo update
helm install cert-manager --namespace ingress-basic --version v0.13.0 jetstack/cert-manager
- Verify the installation:
kubectl get pods --namespace ingress-basic
You should see the cert-manager, cert-manager-cainjector, and cert-manager-webhook pods in a Running state. It may take a minute or so for the TLS assets required by the webhook to be provisioned, so the webhook may take a little longer to start than the other pods.
References: https://cert-manager.io/docs/installation/kubernetes/
- Set your email in deploy/cluster-issuer.yaml and run (a sample ClusterIssuer manifest is shown after these steps):
kubectl apply -f .\deploy\cluster-issuer.yaml --namespace ingress-basic
- Set your FQDN in deploy/ingress.yaml and run:
kubectl apply -f .\deploy\ingress.yaml
Cert-manager has likely automatically created a certificate object for you using ingress-shim, which is automatically deployed with cert-manager since v0.2.2. If not, follow this tutorial to create a certificate object.
- To test, run:
kubectl describe certificate tls-secret
The output should be similar to this, and your connection should now be secure (issuing the certificate might take a few minutes):
Type    Reason        Age   From          Message
Normal  GeneratedKey  98s   cert-manager  Generated a new private key
Normal  Requested     98s   cert-manager  Created new CertificateRequest resource "tls-secret-**********"
Normal  Issued        74s   cert-manager  Certificate issued successfully
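For reference, a Let's Encrypt ClusterIssuer for cert-manager v0.13 (the kind of resource deploy/cluster-issuer.yaml defines) generally looks like the sketch below; the issuer name is an assumption, and the email is the one you set earlier.
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your-email>
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx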
References: Configure certificates for HTTPS
Run this script to execute steps 1 through 5 or follow the steps below.
- Initialize variables:
$resourceGroupName = "<resource-group-name>"
$dbAccountName = "<db-account-name>"
$dbName = "IcecreamDB"
$containerName = "Orders"
- Create an Azure Cosmos DB database account:
az cosmosdb create --name $dbAccountName --resource-group $resourceGroupName
- Create a database:
az cosmosdb sql database create --account-name $dbAccountName --resource-group $resourceGroupName --name $dbName
- Create the Orders container:
az cosmosdb sql container create `
  --account-name $dbAccountName `
  --database-name $dbName `
  --name $containerName `
  --partition-key-path "/id" `
  --resource-group $resourceGroupName
- List AccountEndpoint and AccountKey:
az cosmosdb keys list -g $resourceGroupName --name $dbAccountName --type connection-strings
- Copy AccountEndpoint and AccountKey from the output.
- Update deploy/cosmosdb-orders.yaml with your DB account endpoint, DB key, database name, and container name.
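As a sketch of where those values go, assuming the component is defined as an Azure Cosmos DB state store (the component name and type are assumptions; check deploy/cosmosdb-orders.yaml for the actual ones):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: cosmosdb-orders
spec:
  type: state.azure.cosmosdb
  metadata:
  - name: url
    value: <AccountEndpoint>
  - name: masterKey
    value: <AccountKey>
  - name: database
    value: IcecreamDB
  - name: collection
    value: Orders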
Run this script to execute steps 1 through 2 or follow the steps below.
- Install Redis in your cluster:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis
- Get the Redis password (Windows); a one-line Linux/macOS variant is shown after these steps:
kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64
certutil -decode encoded.b64 password.txt
- Copy the password from password.txt, then delete the two files: password.txt and encoded.b64.
- Set the Redis password in deploy/statestore.yaml.
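On Linux/macOS you can read and decode the Redis password in one step instead of using certutil, for example:
kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode

For orientation, a Dapr Redis state store component with the password filled in might look like the sketch below; the component name and the redis-master host are assumptions based on the default Bitnami chart, and deploy/statestore.yaml is the authoritative file.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: redis-master.default.svc.cluster.local:6379
  - name: redisPassword
    value: <redis-password>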
References:
Run this script to execute steps 1 through 4 or follow the steps below.
- Initialize variables. The Service Bus namespace name must be globally unique and follow Azure's Service Bus naming rules:
$resourceGroupName = "<resource-group-name>"
$namespaceName = "<service-bus-namespace-name>"
- Create a Service Bus namespace:
az servicebus namespace create `
  --name $namespaceName `
  --resource-group $resourceGroupName `
  --location <location> `
  --sku Standard
- Create the topic:
az servicebus topic create --name batchreceived `
  --namespace-name $namespaceName `
  --resource-group $resourceGroupName
- Get the connection string for the namespace:
$connectionString = $(az servicebus namespace authorization-rule keys list --resource-group $resourceGroupName --namespace-name $namespaceName --name RootManageSharedAccessKey --query primaryConnectionString --output tsv)
Write-Host "Connection String:" $connectionString
- Replace <namespace_connection_string> in deploy/messagebus.yaml with your connection string.
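As a sketch of where the connection string goes, a Dapr Azure Service Bus pub/sub component typically looks like this (the component name is an assumption; check deploy/messagebus.yaml for the actual one):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.azure.servicebus
  metadata:
  - name: connectionString
    value: <namespace_connection_string>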
References:
Run this script to execute steps 1 through 2 or follow the steps below.
- Add the App Insights extension to the Azure CLI:
az extension add -n application-insights
- Create an App Insights resource:
az monitor app-insights component create `
  --app <app-insight-resource-name> `
  --location <location> `
  --resource-group <resource-group-name>
- Copy the value of the instrumentationKey; we will need it later.
- Open the LocalForwarder deployment file and set the Instrumentation Key value.
- Deploy the LocalForwarder to your cluster:
kubectl apply -f .\deploy\localforwarder-deployment.yaml
- Deploy the Dapr tracing configuration:
kubectl apply -f .\deploy\dapr-tracing.yaml
- Deploy the exporter:
kubectl apply -f .\deploy\dapr-tracing-exporter.yaml
References: Create an Application Insights resource
Run this script to execute steps 1 through 4 or follow the steps below.
- Deploy KEDA:
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
- Initialize variables:
$resourceGroupName = "<resource-group-name>"
$namespaceName = "<service-bus-namespace-name>"
- Create an Authorization Rule for the 'batchreceived' topic:
az servicebus topic authorization-rule create --resource-group $resourceGroupName --namespace-name $namespaceName --topic-name batchreceived --name kedarule --rights Send Listen Manage
- Get the connection string and create a base64 representation of it:
$primaryConnectionString = az servicebus topic authorization-rule keys list --name kedarule --resource-group $resourceGroupName --namespace-name $namespaceName --topic-name batchreceived --query primaryConnectionString --output tsv
Write-Host "base64 representation of the connection string:"
[System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($primaryConnectionString))
- Replace <your-base64-encoded-connection-string> in the deploy/batch-processor-keda.yaml file with the value from the previous step.
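For orientation, the base64-encoded connection string usually ends up in a Kubernetes Secret that KEDA's TriggerAuthentication references; a minimal sketch, with the secret and key names as assumptions (the real names are defined in deploy/batch-processor-keda.yaml):
apiVersion: v1
kind: Secret
metadata:
  name: servicebus-secret
data:
  servicebus-connectionstring: <your-base64-encoded-connection-string>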
References:
- Create an Azure Container Registry (ACR); a lowercase registry name is recommended to avoid warnings:
az acr create --resource-group <resource-group-name> --name <acr-name> --sku Basic
Take note of loginServer in the output.
- Integrate an existing ACR with existing AKS clusters:
az aks update -n <cluster-name> -g <resource-group-name> --attach-acr <acr-name>
- Change the ACR loginServer and name in the following scripts and run them. They will build an image for each microservice and push it to the registry (a hand-built example for a single image is shown after these steps):
- Update the following files with your registry loginServer:
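For illustration only (the repo's build scripts are the source of truth), building and pushing one of the images by hand would look roughly like this; the image name, tag, and folder are assumptions:
az acr login --name <acr-name>
docker build -t <acr-login-server>/batch-receiver:latest ./batchReceiver
docker push <acr-login-server>/batch-receiver:latest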
References: Create a private container registry using the Azure CLI
- Deploy the Dapr components:
kubectl apply -f .\deploy\statestore.yaml
kubectl apply -f .\deploy\cosmosdb-orders.yaml
kubectl apply -f .\deploy\messagebus.yaml
kubectl apply -f .\deploy\blob-storage.yaml
- Deploy the Batch Receiver microservice:
kubectl apply -f .\deploy\batch-receiver.yaml
Check the logs for batch-receiver. You should see "Batch Receiver listening on port 3000!".
- Subscribe to the Blob Storage.
Now we need to subscribe to a topic to tell Event Grid which events we want to track and where to send them. The batch-receiver microservice should already be running so it can send back the validation code.
- Run this script to create the subscription or follow the steps below.
CLI:
az eventgrid event-subscription create `
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageaccounts/<storage-account-name>" `
  --name blob-created `
  --endpoint-type webhook `
  --endpoint https://<FQDN>/api/blobAddedHandler `
  --included-event-types Microsoft.Storage.BlobCreated
Portal:
- In the portal, navigate to the Azure Storage account that you created earlier.
- On the Storage account page, select Events on the left menu.
- Create a new Event Subscription:
- Enter a name for the event subscription.
- Select Blob Created event in the Event Types drop-down.
- Select Web Hook for Endpoint type.
- Select an endpoint where you want your events to be sent (https://<FQDN>/api/blobAddedHandler).
- Check the logs for batch-receiver. You should see that a subscription validation event has been received along with a validation code.
References: Subscribe to the Blob storage
- Deploy the Batch Generator and Batch Processor microservices:
kubectl apply -f .\deploy\batch-generator.yaml
kubectl apply -f .\deploy\batch-processor-keda.yaml
- Now you can check the logs of the Batch Receiver and see that it starts getting events as blobs are created. Once a batch has all 3 files, it puts a message into pub/sub.
kubectl logs <batch-receiver-pod-name> batch-receiver
- Now you can check the logs of the Batch Processor and see that it receives the message, processes the batch, and stores the orders in Cosmos DB.
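By analogy with the previous step (the batch-processor container name is an assumption), you can view those logs with:
kubectl logs <batch-processor-pod-name> batch-processor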