- AKS201 - Provision, bind and consume Azure Database for MySQL using OSBA
In this module, you will provision and bind Azure Database for MySQL, then configure the voting app to consume it using Open Service Broker for Azure (OSBA).
Open Service Broker for Azure (OSBA) is the open source, Open Service Broker-compatible API server that provisions managed services in Azure. As a prerequisite, you need to install Service Catalog on your Kubernetes cluster.
If you haven't installed the Helm CLI and Tiller yet, please go through the following two steps:
- Install Helm CLI (only if not installed yet)
- Create a service account (only for RBAC-enabled AKS cluster)
Add the Service Catalog chart repository to Helm:
helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
Grant Helm's Tiller permission to administer your cluster so that it can install Service Catalog:
kubectl create clusterrolebinding tiller-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
Install Service Catalog with the Helm chart:
helm install svc-cat/catalog \
--name catalog \
--namespace catalog
Check that Service Catalog is deployed:
helm ls
# or 'helm ls --all catalog'
Verify that servicecatalog appears in the output of "kubectl get apiservice":
kubectl get apiservice -w
Sample Output:
NAME AGE
v1. 21d
v1.apps 21d
v1.authentication.k8s.io 21d
v1.authorization.k8s.io 21d
v1.autoscaling 21d
v1.batch 21d
v1.networking.k8s.io 21d
v1.rbac.authorization.k8s.io 21d
v1.storage.k8s.io 21d
v1alpha1.admissionregistration.k8s.io 16d
v1alpha2.config.istio.io 21d
v1beta1.admissionregistration.k8s.io 21d
v1beta1.apiextensions.k8s.io 21d
v1beta1.apps 21d
v1beta1.authentication.k8s.io 21d
v1beta1.authorization.k8s.io 21d
v1beta1.batch 21d
v1beta1.certificates.k8s.io 21d
v1beta1.events.k8s.io 21d
v1beta1.extensions 21d
v1beta1.policy 21d
v1beta1.rbac.authorization.k8s.io 21d
v1beta1.servicecatalog.k8s.io 4d <<<< This one!!
v1beta1.storage.k8s.io 21d
v1beta2.apps 21d
v2beta1.autoscaling 21d
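Instead of scanning the whole list, you can also query the Service Catalog API service directly (same group/version as shown in the sample output above):

kubectl get apiservice v1beta1.servicecatalog.k8s.io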
In addition, check that the Service Catalog Pods are running:
kubectl get pods --namespace catalog
(Output Example)
NAME READY STATUS RESTARTS AGE
po/catalog-catalog-apiserver-5999465555-9hgwm 2/2 Running 4 9d
po/catalog-catalog-controller-manager-554c758786-f8qvc 1/1 Running 11 9d
Refer to Install Service Catalog for more detail on Service Catalog installation.
The Service Catalog CLI (svcat) is a very useful tool for managing Service Catalog. Install it as follows:
curl -sLO https://servicecatalogcli.blob.core.windows.net/cli/latest/$(uname -s)/$(uname -m)/svcat
# Make the binary executable
chmod +x ./svcat
# Move the binary to a directory on your PATH (e.g., $HOME/bin)
mv svcat $HOME/bin
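To confirm the CLI works, print its version (this assumes $HOME/bin is already on your PATH):

# Verify the installed binary; --client skips querying the in-cluster server
svcat version --client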
Refer to Installing the Service Catalog CLI for more detail on Service Catalog CLI installation.
Get the service principal that you created in the preparation step and use its values for the AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID variables below, then install OSBA with its Helm chart:
AZURE_CLIENT_ID='Your Service Principal Client ID'
AZURE_CLIENT_SECRET='Your Service Principal Client Secret'
AZURE_TENANT_ID='Your Service Principal Tenant ID'
AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv | sed -e "s/[\r\n]\+//g" )
helm repo add azure https://kubernetescharts.blob.core.windows.net/azure
helm install azure/open-service-broker-azure --name osba --namespace osba \
--set azure.subscriptionId=$AZURE_SUBSCRIPTION_ID \
--set azure.tenantId=$AZURE_TENANT_ID \
--set azure.clientId=$AZURE_CLIENT_ID \
--set azure.clientSecret=$AZURE_CLIENT_SECRET
Check that the OSBA Pods are ready and running:
kubectl get pods --namespace osba -w
(Output Example)
NAME READY STATUS RESTARTS AGE
po/osba-azure-service-broker-8495bff484-7ggj6 1/1 Running 0 9d
po/osba-redis-5b44fc9779-hgnck 1/1 Running 0 9d
Refer to Install Open Service Broker for Azure for more detail on OSBA installation.
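Once OSBA is registered with Service Catalog, you can browse the services and plans that the broker exposes with svcat; the azure-mysql-5-7 class used in the next step should show up in the class list:

svcat get brokers
svcat get classes
svcat get plans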
First of all, open kubernetes-manifests/vote-sb/mysql-instance.yaml and make sure that the location and resourceGroup parameters match the ones you used when you created the AKS cluster.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-osba-mysql-instance
  namespace: default
spec:
  clusterServiceClassExternalName: azure-mysql-5-7
  clusterServicePlanExternalName: general-purpose
  parameters:
    location: japaneast         # <<< HERE
    resourceGroup: RG_azconlab  # <<< HERE
    sslEnforcement: disabled
    firewallRules:
    - name: "AllowFromAzure"
      startIPAddress: "0.0.0.0"
      endIPAddress: "0.0.0.0"
    #- name: AllowAll
    #  startIPAddress: 0.0.0.0
    #  endIPAddress: 255.255.255.255
    cores: 2
    storage: 5
    backupRetention: 7
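If you want to check which plans and parameters the class offers before editing the manifest, svcat can describe it (the authoritative parameter list is in the OSBA documentation linked at the bottom of this page):

$ svcat describe class azure-mysql-5-7
$ svcat get plans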
Then, run the following command to create the MySQL instance:
$ kubectl create -f kubernetes-manifests/vote-sb/mysql-instance.yaml
You can get MySQL's provisioning status via the svcat command:
$ svcat get instances
           NAME             NAMESPACE        CLASS              PLAN          STATUS
+------------------------+-----------+-----------------+-----------------+--------------+
  my-osba-mysql-instance    default    azure-mysql-5-7   general-purpose   Provisioning
The status above is Provisioning. Please wait until the status changes to Ready.
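If you want more detail while waiting, you can describe the instance or inspect the underlying Service Catalog resource; both show the provisioning conditions reported by the broker:

$ svcat describe instance my-osba-mysql-instance
$ kubectl get serviceinstance my-osba-mysql-instance -o yaml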
Here is the YAML file to bind the instance of the MySQL service - kubernetes-manifests/vote-sb/mysql-binding.yaml.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-osba-mysql-binding
  namespace: default
spec:
  instanceRef:
    name: my-osba-mysql-instance
  secretName: my-osba-mysql-secret
Run the following command to bind the MySQL instance:
$ kubectl create -f kubernetes-manifests/vote-sb/mysql-binding.yaml
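As with the instance, you can check the binding status with svcat or kubectl and wait until it becomes Ready:

$ svcat get bindings
$ kubectl get servicebinding my-osba-mysql-binding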
After binding, the final step involves mapping the connection credentials and service-specific information into the application. These pieces of information are stored in a Secret (named my-osba-mysql-secret in this case) that the application in the cluster can access and use to connect directly with the managed service. To understand more, dump the Secret and see all the credential information stored in it:

$ kubectl get secret my-osba-mysql-secret -o yaml

apiVersion: v1
data:
  database: Y3hjbGJyYWIzYw==
  host: YTdkNjhkOTUtYzY0My00MTI1LWIyOTUtMTMxODMyZTMxYmI2Lm15c3FsLmRhdGFiYXNlLmF6dXJlLmNvbQ==
  password: encwVnF6V0lpbEtQU256TQ==
  port: MzMwNg==
  sslRequired: ZmFsc2U=
  tags: WyJteXNxbCJd
  uri: bXlzcWw6Ly9lb2swOWd2aWRoJTQwYTdkNjhkOTUtYzY0My00MTI1LWIyOTUtMTMxODMyZTMxYmI2Onp3MFZxeldJaWxLUFNuek1AYTdkNjhkOTUtYzY0My00MTI1LWIyOTUtMTMxODMyZTMxYmI2Lm15c3FsLmRhdGFiYXNlLmF6dXJlLmNvbTozMzA2L2N4Y2xicmFiM2M/dXNlU1NMPXRydWUmcmVxdWlyZVNTTD10cnVl
  username: ZW9rMDlndmlkaEBhN2Q2OGQ5NS1jNjQzLTQxMjUtYjI5NS0xMzE4MzJlMzFiYjY=
kind: Secret
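The values in the Secret are base64 encoded. As a quick check (key names taken from the sample output above), you can decode individual fields like this:

# Decode selected keys from the binding Secret (use `base64 -D` on older macOS)
$ kubectl get secret my-osba-mysql-secret -o jsonpath='{.data.host}' | base64 --decode; echo
$ kubectl get secret my-osba-mysql-secret -o jsonpath='{.data.username}' | base64 --decode; echo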
Run the following command to initialize the database for the voting app:
# scripts/init-azure-mysql-table.sh <osba-secret-name>
$ scripts/init-azure-mysql-table.sh my-osba-mysql-secret
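If you want to confirm that the table was actually created, one option (a minimal sketch, assuming the mysql:5.7 image can be pulled and that the AllowFromAzure firewall rule above permits connections from the cluster) is to run a throwaway MySQL client Pod using the credentials from the binding Secret:

# Pull the connection info out of the binding Secret
DB_HOST=$(kubectl get secret my-osba-mysql-secret -o jsonpath='{.data.host}' | base64 --decode)
DB_USER=$(kubectl get secret my-osba-mysql-secret -o jsonpath='{.data.username}' | base64 --decode)
DB_PASS=$(kubectl get secret my-osba-mysql-secret -o jsonpath='{.data.password}' | base64 --decode)
DB_NAME=$(kubectl get secret my-osba-mysql-secret -o jsonpath='{.data.database}' | base64 --decode)
# Run a one-off mysql client Pod and list the tables
kubectl run mysql-client --rm -it --restart=Never --image=mysql:5.7 -- \
  mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "SHOW TABLES;"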
Delete all resources that have the label app=azure-voting-app:

$ kubectl delete svc,deploy,pvc,sc,secrets,cm,ingress -l app=azure-voting-app

Then, create the ConfigMap resource with the following command:

$ kubectl apply -f kubernetes-manifests/vote-sb/configmap.yaml
configmap "azure-voting-app-config" created
Get the ConfigMap list with the following command and confirm that the azure-voting-app-config ConfigMap resource is in the list:
$ kubectl get configmap
NAME DATA AGE
azure-voting-app-config 1 50s
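You can also dump the ConfigMap itself to see exactly what configuration the app will consume (the contents come from kubernetes-manifests/vote-sb/configmap.yaml):

$ kubectl get configmap azure-voting-app-config -o yaml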
Create the Deployment resources with the following command:
$ kubectl apply -f kubernetes-manifests/vote-sb/deployment.yaml
deployment "azure-voting-app-back" created
deployment "azure-voting-app-front" created
Get the Pod list and confirm that the status of all created Pods is Running:

$ kubectl get pod -w
NAME READY STATUS RESTARTS AGE
azure-voting-app-back-75b9bbc874-8wx6p 0/1 ContainerCreating 0 1m
azure-voting-app-front-86694fdcb4-5jjsm 0/1 ContainerCreating 0 1m
azure-voting-app-front-86694fdcb4-t6pg6 0/1 ContainerCreating 0 1m
azure-voting-app-back-75b9bbc874-8wx6p 1/1 Running 0 1m
azure-voting-app-front-86694fdcb4-5jjsm 1/1 Running 0 2m
azure-voting-app-front-86694fdcb4-t6pg6 1/1 Running 0 2m
The -w option watches for changes after listing/getting the requested objects.
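If a Pod stays in a non-Running state, the usual first steps are to check its events and logs, for example:

$ kubectl describe pod <pod-name>
$ kubectl logs <pod-name>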
Get the Deployment list and confirm that the DESIRED and AVAILABLE counts are the same:
$ kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
azure-voting-app-back 1 1 1 1 1m
azure-voting-app-front 2 2 2 2 1m
Create the Service resource with the following command:
$ kubectl apply -f kubernetes-manifests/vote-sb/service.yaml
service "azure-voting-app-front" created
[NOTE] In this case, the service type is defined as ClusterIP, so no EXTERNAL-IP is assigned to the Service.
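You can confirm this by listing the Service; for a ClusterIP Service the EXTERNAL-IP column shows <none>:

$ kubectl get svc azure-voting-app-front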
Open kubernetes-manifests/vote-sb/ingress.yaml and replace <CLUSTER_SPECIFIC_DNS_ZONE> with the DNS zone name obtained from the DNS zone resource created in the auto-created AKS resource group named MC_<ResourceGroup>_<ClusterName>_<region>. Please refer to Ingress01: Setup HTTP Application Routing for more detail on how to get the DNS zone name.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: azure-voting-app
  labels:
    app: azure-voting-app
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: vote.<CLUSTER_SPECIFIC_DNS_ZONE>
    http:
      paths:
      - backend:
          serviceName: azure-voting-app-front
          servicePort: 80
        path: /
Then, deploy the ingress:
$ kubectl apply -f kubernetes-manifests/vote-sb/ingress.yaml
ingress.extensions/azure-voting-app created
Check that the ingress is actually created:
$ kubectl get ingress -w
NAME               HOSTS                                        ADDRESS   PORTS     AGE
azure-voting-app   vote.f7418ec8af894af8a2ab.eastus.aksapp.io             80        1m
Finally, you can access the app at the URL - http://vote.<CLUSTER_SPECIFIC_DNS_ZONE>
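For a quick check from the command line (DNS for the HTTP application routing zone can take a few minutes to propagate), request the page headers:

$ curl -I http://vote.<CLUSTER_SPECIFIC_DNS_ZONE>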
- Installing Helm
- Install Service Catalog
- Install Open Service Broker for Azure
- Parameters of Broker for Azure Database for MySQL
- Azure Database for MySQL