diff --git a/README.md b/README.md
index 075b14076..a3d726ba2 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,11 @@
-# Kubernetes The Hard Way
+# Kubernetes The Hard Way on Azure
-This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Container Engine](https://cloud.google.com/container-engine), or the [Getting Started Guides](http://kubernetes.io/docs/getting-started-guides/).
+This tutorial is designed for [Microsoft Azure](https://azure.microsoft.com) and the [Azure CLI 2.0](https://github.com/azure/azure-cli).
+It is a fork of the great [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) by [Kelsey Hightower](https://twitter.com/kelseyhightower), which describes the same steps using [Google Cloud Platform](https://cloud.google.com).
+
+The Azure part is based on the superb translation done by [Jonathan Carter - @lostintangent](https://twitter.com/LostInTangent) in this [fork](https://github.com/lostintangent/kubernetes-the-hard-way). He did the real work behind the Azure "translation".
+
+This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service), or the [Getting Started Guides](http://kubernetes.io/docs/getting-started-guides).
Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
@@ -14,14 +19,14 @@ The target audience for this tutorial is someone planning to support a productio
Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
-* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.7.4
+* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.7.5
 * [CRI-O Container Runtime](https://github.com/kubernetes-incubator/cri-o) v1.0.0-beta.0
 * [CNI Container Networking](https://github.com/containernetworking/cni) v0.6.0
-* [etcd](https://github.com/coreos/etcd) 3.2.6
+* [etcd](https://github.com/coreos/etcd) 3.2.7
## Labs
-This tutorial assumes you have access to the [Google Cloud Platform](https://cloud.google.com). While GCP is used for basic infrastructure requirements the lessons learned in this tutorial can be applied to other platforms.
+This tutorial assumes you have access to [Microsoft Azure](https://azure.microsoft.com). While Azure is used for basic infrastructure requirements, the lessons learned in this tutorial can be applied to other platforms.
 * [Prerequisites](docs/01-prerequisites.md)
 * [Installing the Client Tools](docs/02-client-tools.md)
diff --git a/docs/01-prerequisites.md b/docs/01-prerequisites.md
index 0bb2ba889..51ab91ef0 100644
--- a/docs/01-prerequisites.md
+++ b/docs/01-prerequisites.md
@@ -1,41 +1,33 @@
 # Prerequisites
-## Google Cloud Platform
+## Microsoft Azure
-This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits.
+This tutorial leverages [Microsoft Azure](https://azure.microsoft.com) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://azure.microsoft.com/en-us/free/) for $200 in free credits.
-[Estimated cost](https://cloud.google.com/products/calculator/#id=78df6ced-9c50-48f8-a670-bc5003f2ddaa) to run this tutorial: $0.22 per hour ($5.39 per day).
+[Estimated cost](https://azure.microsoft.com/en-us/pricing/calculator/) to run this tutorial: $0.40 per hour (roughly $10 per day).
-> The compute resources required for this tutorial exceed the Google Cloud Platform free tier.
+> The compute resources required for this tutorial will not exceed the $200 Microsoft Azure free trial credit.
-## Google Cloud Platform SDK
+## Microsoft Azure CLI
-### Install the Google Cloud SDK
+### Install the Microsoft Azure CLI 2.0
-Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.
+Follow the Microsoft Azure CLI 2.0 [documentation](https://github.com/azure/azure-cli#installation) to install and configure the `az` command line utility.
-Verify the Google Cloud SDK version is 169.0.0 or higher:
+Verify the Microsoft Azure CLI 2.0 version is 2.0.14 or higher:
+```shell
+az --version
```
-gcloud version
-```
-
-### Set a Default Compute Region and Zone
-This tutorial assumes a default compute region and zone have been configured.
+### Create a default Resource Group in a location
-Set a default compute region:
+This guide assumes you've installed the [Azure CLI 2.0](https://github.com/azure/azure-cli#installation) and will be creating resources in the `westus2` location, within a resource group named `kubernetes`. To create this resource group, run the following command:
-```
-gcloud config set compute/region us-west1
-```
-
-Set a default compute zone:
-
-```
-gcloud config set compute/zone us-west1-c
+```shell
+az group create -n kubernetes -l westus2
```
-> Use the `gcloud compute zones list` command to view additional regions and zones.
+> Use the `az account list-locations` command to view additional locations.
Next: [Installing the Client Tools](02-client-tools.md)
diff --git a/docs/02-client-tools.md b/docs/02-client-tools.md
index ba681dcb9..edfcfd1b6 100644
--- a/docs/02-client-tools.md
+++ b/docs/02-client-tools.md
@@ -2,7 +2,6 @@
In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl).
-
## Install CFSSL
The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
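> The downloads below are prebuilt `cfssl` and `cfssljson` binaries. If they do not run on your platform, both tools can also be built from source with a working Go toolchain, following the install notes in the cfssl repository (a sketch; assumes `$GOPATH/bin` is on your `PATH`):

```shell
# Builds cfssl, cfssljson and the other cfssl command line tools from source
go get -u github.com/cloudflare/cfssl/cmd/...
```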
@@ -11,41 +10,41 @@ Download and install `cfssl` and `cfssljson` from the [cfssl repository](https:/ ### OS X -``` +```shell wget -q --show-progress --https-only --timestamping \ https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64 \ https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64 ``` -``` +```shell chmod +x cfssl_darwin-amd64 cfssljson_darwin-amd64 ``` -``` +```shell sudo mv cfssl_darwin-amd64 /usr/local/bin/cfssl ``` -``` +```shell sudo mv cfssljson_darwin-amd64 /usr/local/bin/cfssljson ``` ### Linux -``` +```shell wget -q --show-progress --https-only --timestamping \ https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \ https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 ``` -``` +```shell chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 ``` -``` +```shell sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl ``` -``` +```shell sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson ``` @@ -53,19 +52,31 @@ sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson Verify `cfssl` version 1.2.0 or higher is installed: -``` +```shell cfssl version ``` +If this step fails with a runtime error, try installing cfssl following instructions on [CloudFlare's repository](https://github.com/cloudflare/cfssl#installation) + > output -``` +```shell Version: 1.2.0 Revision: dev -Runtime: go1.6 +Runtime: go1.9 ``` -> The cfssljson command line utility does not provide a way to print its version. +```shell +cfssljson -version +``` + +> output + +```shell +Version: 1.2.0 +Revision: dev +Runtime: go1.9 +``` ## Install kubectl @@ -73,44 +84,44 @@ The `kubectl` command line utility is used to interact with the Kubernetes API S ### OS X -``` -wget https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/darwin/amd64/kubectl +```shell +wget https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/darwin/amd64/kubectl ``` -``` +```shell chmod +x kubectl ``` -``` +```shell sudo mv kubectl /usr/local/bin/ ``` ### Linux -``` -wget https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl +```shell +wget https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/linux/amd64/kubectl ``` -``` +```shell chmod +x kubectl ``` -``` +```shell sudo mv kubectl /usr/local/bin/ ``` ### Verification -Verify `kubectl` version 1.7.4 or higher is installed: +Verify `kubectl` version 1.7.5 or higher is installed: -``` +```shell kubectl version --client ``` > output -``` -Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"} +```shell +Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-09-02T18:55:00Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"} ``` Next: [Provisioning Compute Resources](03-compute-resources.md) diff --git a/docs/03-compute-resources.md b/docs/03-compute-resources.md index d13e20288..ff6d18b0a 100644 --- a/docs/03-compute-resources.md +++ b/docs/03-compute-resources.md @@ -1,8 +1,8 @@ # Provisioning Compute Resources -Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. 
In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster across a single [compute zone](https://cloud.google.com/compute/docs/regions-zones/regions-zones).
-
-> Ensure a default compute zone and region have been set as described in the [Prerequisites](01-prerequisites.md#set-a-default-compute-region-and-zone) lab.
+Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster within a single [Resource Group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups) in a single [region](https://azure.microsoft.com/en-us/regions/).
+
+> Ensure a resource group has been created as described in the [Prerequisites](01-prerequisites.md#create-a-default-resource-group-in-a-location) lab.
## Networking
@@ -10,117 +10,146 @@ The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-ad
> Setting up network policies is out of scope for this tutorial.
-### Virtual Private Cloud Network
+### Virtual Network
-In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) (VPC) network will be setup to host the Kubernetes cluster.
+In this section a dedicated [Virtual Network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview) (VNet) will be set up to host the Kubernetes cluster.
-Create the `kubernetes-the-hard-way` custom VPC network:
+Create the `kubernetes-vnet` custom VNet with a `kubernetes-subnet` subnet, provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster:
-```
-gcloud compute networks create kubernetes-the-hard-way --mode custom
-```
-
-A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
-
-Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:
-
-```
-gcloud compute networks subnets create kubernetes \
-  --network kubernetes-the-hard-way \
-  --range 10.240.0.0/24
+```shell
+az network vnet create -g kubernetes \
+  -n kubernetes-vnet \
+  --address-prefix 10.240.0.0/16 \
+  --subnet-name kubernetes-subnet
```
> The `10.240.0.0/24` IP address range can host up to 254 compute instances.
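Since no explicit subnet prefix is passed to `az network vnet create` above, the subnet's range is chosen by the CLI. You can confirm what was actually provisioned before moving on with a quick check (standard `az network` commands; the `--query` and `-o` flags only shape the output):

```shell
# Show the subnet name and the address prefix that was assigned to it
az network vnet subnet show -g kubernetes \
  --vnet-name kubernetes-vnet \
  -n kubernetes-subnet \
  --query "{Name:name, Prefix:addressPrefix}" -o table
```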
### Firewall Rules -Create a firewall rule that allows internal communication across all protocols: +Create a firewall ([Network Security Group](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-nsg)) and assign it to the subnet: +```shell +az network nsg create -g kubernetes -n kubernetes-nsg ``` -gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ - --allow tcp,udp,icmp \ - --network kubernetes-the-hard-way \ - --source-ranges 10.240.0.0/24,10.200.0.0/16 -``` - -Create a firewall rule that allows external SSH, ICMP, and HTTPS: -``` -gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ - --allow tcp:22,tcp:6443,icmp \ - --network kubernetes-the-hard-way \ - --source-ranges 0.0.0.0/0 +```shell +az network vnet subnet update -g kubernetes \ + -n kubernetes-subnet \ + --vnet-name kubernetes-vnet \ + --network-security-group kubernetes-nsg ``` -Create a firewall rule that allows health check probes from the GCP [network load balancer IP ranges](https://cloud.google.com/compute/docs/load-balancing/network/#firewall_rules_and_network_load_balancing): +Create a firewall rule that allows external SSH and HTTPS: +```shell +az network nsg rule create -g kubernetes \ + -n kubernetes-allow-ssh \ + --access allow \ + --destination-address-prefix '*' \ + --destination-port-range 22 \ + --direction inbound \ + --nsg-name kubernetes-nsg \ + --protocol tcp \ + --source-address-prefix '*' \ + --source-port-range '*' \ + --priority 1000 ``` -gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-checks \ - --allow tcp:8080 \ - --network kubernetes-the-hard-way \ - --source-ranges 209.85.204.0/22,209.85.152.0/22,35.191.0.0/16 + +```shell +az network nsg rule create -g kubernetes \ + -n kubernetes-allow-api-server \ + --access allow \ + --destination-address-prefix '*' \ + --destination-port-range 6443 \ + --direction inbound \ + --nsg-name kubernetes-nsg \ + --protocol tcp \ + --source-address-prefix '*' \ + --source-port-range '*' \ + --priority 1001 ``` -> An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients. +> An [external load balancer](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview) will be used to expose the Kubernetes API Servers to remote clients. 
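> Note that, unlike the GCP original, no "allow internal" rule is created here: an Azure NSG ships with default rules that already permit traffic originating inside the virtual network and from the Azure load balancer, so only the inbound SSH and API server ports need to be opened explicitly. You can list those defaults if you are curious (a sketch; the rule names come from the platform):

```shell
# Print the names of the built-in default rules attached to the NSG
az network nsg show -g kubernetes -n kubernetes-nsg \
  --query "defaultSecurityRules[].name" -o tsv
```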
-List the firewall rules in the `kubernetes-the-hard-way` VPC network: +List the firewall rules in the `kubernetes-vnet` VNet network: -``` -gcloud compute firewall-rules list --filter "network kubernetes-the-hard-way" +```shell +az network nsg rule list -g kubernetes --nsg-name kubernetes-nsg --query "[].{Name:name, \ + Direction:direction, Priority:priority, Port:destinationPortRange}" -o table ``` > output -``` -NAME NETWORK DIRECTION PRIORITY ALLOW DENY -kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp -kubernetes-the-hard-way-allow-health-checks kubernetes-the-hard-way INGRESS 1000 tcp:8080 -kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp +```shell +Name Direction Priority Port +--------------------------- ----------- ---------- ------ +kubernetes-allow-ssh Inbound 1000 22 +kubernetes-allow-api-server Inbound 1001 6443 ``` ### Kubernetes Public IP Address Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers: -``` -gcloud compute addresses create kubernetes-the-hard-way \ - --region $(gcloud config get-value compute/region) +```shell +az network lb create -g kubernetes \ + -n kubernetes-lb \ + --backend-pool-name kubernetes-lb-pool \ + --public-ip-address kubernetes-pip \ + --public-ip-address-allocation static ``` -Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region: +Verify the `kubernetes-pip` static IP address was created correctly in the `kubernetes` Resource Group and chosen region: -``` -gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')" +```shell +az network public-ip list --query="[?name=='kubernetes-pip'].{ResourceGroup:resourceGroup, \ + Region:location,Allocation:publicIpAllocationMethod,IP:ipAddress}" -o table ``` > output -``` -NAME REGION ADDRESS STATUS -kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED +```shell +ResourceGroup Region Allocation IP +--------------- -------- ------------ -------------- +kubernetes westus2 Static XX.XXX.XXX.XXX ``` -## Compute Instances +## Virtual Machines The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 16.04, which has good support for the [CRI-O container runtime](https://github.com/kubernetes-incubator/cri-o). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process. ### Kubernetes Controllers -Create three compute instances which will host the Kubernetes control plane: +Create three compute instances which will host the Kubernetes control plane in `controller-as` [Availability Set](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/regions-and-availability#availability-sets): +```shell +az vm availability-set create -g kubernetes -n controller-as ``` + +```shell for i in 0 1 2; do - gcloud compute instances create controller-${i} \ - --async \ - --boot-disk-size 200GB \ - --can-ip-forward \ - --image-family ubuntu-1604-lts \ - --image-project ubuntu-os-cloud \ - --machine-type n1-standard-1 \ - --private-network-ip 10.240.0.1${i} \ - --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \ - --subnet kubernetes \ - --tags kubernetes-the-hard-way,controller + echo "[Controller ${i}] Creating public IP..." + az network public-ip create -n controller-${i}-pip -g kubernetes > /dev/null + + echo "[Controller ${i}] Creating NIC..." 
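+  # Each controller NIC keeps a fixed private IP (10.240.0.1${i}), enables IP
+  # forwarding for pod traffic, and joins the load balancer backend pool so the
+  # API server is reachable through the kubernetes-pip frontend address.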
+  az network nic create -g kubernetes \
+    -n controller-${i}-nic \
+    --private-ip-address 10.240.0.1${i} \
+    --public-ip-address controller-${i}-pip \
+    --vnet kubernetes-vnet \
+    --subnet kubernetes-subnet \
+    --ip-forwarding \
+    --lb-name kubernetes-lb \
+    --lb-address-pools kubernetes-lb-pool > /dev/null
+
+  echo "[Controller ${i}] Creating VM..."
+  az vm create -g kubernetes \
+    -n controller-${i} \
+    --image Canonical:UbuntuServer:16.04.0-LTS:latest \
+    --nics controller-${i}-nic \
+    --availability-set controller-as \
+    --nsg '' > /dev/null
done
```
@@ -128,45 +157,58 @@ done
### Kubernetes Workers
Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. The `pod-cidr` instance metadata will be used to expose pod subnet allocations to compute instances at runtime.
> The Kubernetes cluster CIDR range is defined by the Controller Manager's `--cluster-cidr` flag. In this tutorial the cluster CIDR range will be set to `10.200.0.0/16`, which supports 254 subnets.
-Create three compute instances which will host the Kubernetes worker nodes:
+Create three compute instances which will host the Kubernetes worker nodes in the `worker-as` Availability Set:
+```shell
+az vm availability-set create -g kubernetes -n worker-as
```
+
+```shell
for i in 0 1 2; do
-  gcloud compute instances create worker-${i} \
-    --async \
-    --boot-disk-size 200GB \
-    --can-ip-forward \
-    --image-family ubuntu-1604-lts \
-    --image-project ubuntu-os-cloud \
-    --machine-type n1-standard-1 \
-    --metadata pod-cidr=10.200.${i}.0/24 \
-    --private-network-ip 10.240.0.2${i} \
-    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
-    --subnet kubernetes \
-    --tags kubernetes-the-hard-way,worker
-  done
+  echo "[Worker ${i}] Creating public IP..."
+  az network public-ip create -n worker-${i}-pip -g kubernetes > /dev/null
+
+  echo "[Worker ${i}] Creating NIC..."
+  az network nic create -g kubernetes \
+    -n worker-${i}-nic \
+    --private-ip-address 10.240.0.2${i} \
+    --public-ip-address worker-${i}-pip \
+    --vnet kubernetes-vnet \
+    --subnet kubernetes-subnet \
+    --ip-forwarding > /dev/null
+
+  echo "[Worker ${i}] Creating VM..."
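+  # The worker's pod CIDR is recorded as a VM tag; Azure instance metadata cannot
+  # carry custom key/value pairs here, so lab 09 reads the CIDR back from this tag.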
+ az vm create -g kubernetes \ + -n worker-${i} \ + --image Canonical:UbuntuServer:16.04.0-LTS:latest \ + --nics worker-${i}-nic \ + --tags pod-cidr=10.200.${i}.0/24 \ + --availability-set worker-as \ + --nsg '' > /dev/null +done ``` ### Verification List the compute instances in your default compute zone: -``` -gcloud compute instances list +```shell +az vm list -d -g kubernetes -o table ``` > output -``` -NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS -controller-0 us-west1-c n1-standard-1 10.240.0.10 XX.XXX.XXX.XXX RUNNING -controller-1 us-west1-c n1-standard-1 10.240.0.11 XX.XXX.X.XX RUNNING -controller-2 us-west1-c n1-standard-1 10.240.0.12 XX.XXX.XXX.XX RUNNING -worker-0 us-west1-c n1-standard-1 10.240.0.20 XXX.XXX.XXX.XX RUNNING -worker-1 us-west1-c n1-standard-1 10.240.0.21 XX.XXX.XX.XXX RUNNING -worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX RUNNING +```shell +Name ResourceGroup PowerState PublicIps Location +------------ --------------- ------------ -------------- ---------- +controller-0 kubernetes VM running XX.XXX.XXX.XXX westus2 +controller-1 kubernetes VM running XX.XXX.XXX.XXX westus2 +controller-2 kubernetes VM running XX.XXX.XXX.XXX westus2 +worker-0 kubernetes VM running XX.XXX.XXX.XXX westus2 +worker-1 kubernetes VM running XX.XXX.XXX.XXX westus2 +worker-2 kubernetes VM running XX.XXX.XXX.XXX westus2 ``` Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md) diff --git a/docs/04-certificate-authority.md b/docs/04-certificate-authority.md index 72293563b..864940180 100644 --- a/docs/04-certificate-authority.md +++ b/docs/04-certificate-authority.md @@ -8,7 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g Create the CA configuration file: -``` +```shell cat > ca-config.json < ca-csr.json < admin-csr.json < ${instance}-csr.json < ${instance}-csr.json < kube-proxy-csr.json < kubernetes-csr.json < encryption-config.yaml < etcd.service < output -``` +```shell 3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379 f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379 ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379 diff --git a/docs/08-bootstrapping-kubernetes-controllers.md b/docs/08-bootstrapping-kubernetes-controllers.md index db64cca95..240d5329d 100644 --- a/docs/08-bootstrapping-kubernetes-controllers.md +++ b/docs/08-bootstrapping-kubernetes-controllers.md @@ -4,10 +4,14 @@ In this lab you will bootstrap the Kubernetes control plane across three compute ## Prerequisites -The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example: +The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `az` command to find its public IP and ssh to it. 
Example: -``` -gcloud compute ssh controller-0 +```shell +CONTROLLER="controller-0" +PUBLIC_IP_ADDRESS=$(az network public-ip show -g kubernetes \ + -n ${CONTROLLER}-pip --query "ipAddress" -otsv) + +ssh $(whoami)@${PUBLIC_IP_ADDRESS} ``` ## Provision the Kubernetes Control Plane @@ -16,44 +20,43 @@ gcloud compute ssh controller-0 Download the official Kubernetes release binaries: -``` +```shell wget -q --show-progress --https-only --timestamping \ - "https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-apiserver" \ - "https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-controller-manager" \ - "https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-scheduler" \ - "https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl" + "https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/linux/amd64/kube-apiserver" \ + "https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/linux/amd64/kube-controller-manager" \ + "https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/linux/amd64/kube-scheduler" \ + "https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/linux/amd64/kubectl" ``` Install the Kubernetes binaries: -``` +```shell chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl ``` -``` +```shell sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/ ``` ### Configure the Kubernetes API Server -``` +```shell sudo mkdir -p /var/lib/kubernetes/ ``` -``` +```shell sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem encryption-config.yaml /var/lib/kubernetes/ ``` The instance internal IP address will be used advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance: -``` -INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \ - http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip) +```shell +INTERNAL_IP=$(ip addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}') ``` Create the `kube-apiserver.service` systemd unit file: -``` +```shell cat > kube-apiserver.service < kube-controller-manager.service < kube-scheduler.service < output -``` +```shell { "major": "1", "minor": "7", - "gitVersion": "v1.7.4", - "gitCommit": "793658f2d7ca7f064d2bdf606519f9fe1229c381", + "gitVersion": "v1.7.5", + "gitCommit": "17d7182a7ccbb167074be7a87f0a68bd00d58d97", "gitTreeState": "clean", - "buildDate": "2017-08-17T08:30:51Z", + "buildDate": "2017-08-31T08:56:23Z", "goVersion": "go1.8.3", "compiler": "gc", "platform": "linux/amd64" diff --git a/docs/09-bootstrapping-kubernetes-workers.md b/docs/09-bootstrapping-kubernetes-workers.md index c38761fa6..711d47043 100644 --- a/docs/09-bootstrapping-kubernetes-workers.md +++ b/docs/09-bootstrapping-kubernetes-workers.md @@ -4,10 +4,29 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp ## Prerequisites -The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example: +The commands in this lab must be run on each controller instance: `worker-0`, `worker-1`, and `worker-2`. +Azure Metadata Instace service cannot be used to set custom property. We have used *tags* on each worker VM to defined POD-CIDR used later. +Retrieve the POD CIDR range for the current compute instance and keep it for later. 
+ +```shell +az vm show -g kubernetes --name worker-0 --query "tags" -o tsv +``` + +> output + +```shell +10.200.0.0/24 ``` -gcloud compute ssh worker-0 + +Login to each worker instance using the `az` command to find its public IP and ssh to it. Example: + +```shell +CONTROLLER="worker-0" +PUBLIC_IP_ADDRESS=$(az network public-ip show -g kubernetes \ + -n ${CONTROLLER}-pip --query "ipAddress" -otsv) + +ssh $(whoami)@${PUBLIC_IP_ADDRESS} ``` ## Provisioning a Kubernetes Worker Node @@ -16,35 +35,35 @@ gcloud compute ssh worker-0 Add the `alexlarsson/flatpak` [PPA](https://launchpad.net/ubuntu/+ppas) which hosts the `libostree` package: -``` +```shell sudo add-apt-repository -y ppa:alexlarsson/flatpak ``` -``` +```shell sudo apt-get update ``` Install the OS dependencies required by the cri-o container runtime: -``` +```shell sudo apt-get install -y socat libgpgme11 libostree-1-1 ``` ### Download and Install Worker Binaries -``` +```shell wget -q --show-progress --https-only --timestamping \ https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \ https://github.com/opencontainers/runc/releases/download/v1.0.0-rc4/runc.amd64 \ https://storage.googleapis.com/kubernetes-the-hard-way/crio-amd64-v1.0.0-beta.0.tar.gz \ - https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl \ - https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-proxy \ - https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubelet + https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/linux/amd64/kubectl \ + https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/linux/amd64/kube-proxy \ + https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/linux/amd64/kubelet ``` Create the installation directories: -``` +```shell sudo mkdir -p \ /etc/containers \ /etc/cni/net.d \ @@ -59,43 +78,36 @@ sudo mkdir -p \ Install the worker binaries: -``` +```shell sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/ ``` -``` +```shell tar -xvf crio-amd64-v1.0.0-beta.0.tar.gz ``` -``` +```shell chmod +x kubectl kube-proxy kubelet runc.amd64 ``` -``` +```shell sudo mv runc.amd64 /usr/local/bin/runc ``` -``` +```shell sudo mv crio crioctl kpod kubectl kube-proxy kubelet /usr/local/bin/ ``` -``` +```shell sudo mv conmon pause /usr/local/libexec/crio/ ``` - ### Configure CNI Networking -Retrieve the Pod CIDR range for the current compute instance: - -``` -POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \ - http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr) -``` +Create the `bridge` network configuration file replacing POD_CIDR with address retrieved initially from Azure VM tags: -Create the `bridge` network configuration file: - -``` +```shell +POD_CIDR="10.200.0.0/24" cat > 10-bridge.conf < 99-loopback.conf < crio.service < kubelet.service < kube-proxy.service < output -``` +```shell NAME STATUS AGE VERSION -worker-0 Ready 5m v1.7.4 -worker-1 Ready 3m v1.7.4 -worker-2 Ready 7s v1.7.4 +worker-0 Ready 5m v1.7.5 +worker-1 Ready 3m v1.7.5 +worker-2 Ready 7s v1.7.5 ``` Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md) diff --git a/docs/10-configuring-kubectl.md b/docs/10-configuring-kubectl.md index 3d02dd365..2c8897dce 100644 --- a/docs/10-configuring-kubectl.md +++ b/docs/10-configuring-kubectl.md @@ -10,34 +10,33 @@ Each kubeconfig requires a Kubernetes API Server to connect to. 
To support high Retrieve the `kubernetes-the-hard-way` static IP address: -``` -KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ - --region $(gcloud config get-value compute/region) \ - --format 'value(address)') +```shell +KUBERNETES_PUBLIC_ADDRESS=$(az network public-ip show -g kubernetes \ + -n kubernetes-pip --query ipAddress -otsv) ``` Generate a kubeconfig file suitable for authenticating as the `admin` user: -``` +```shell kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ --embed-certs=true \ --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 ``` -``` +```shell kubectl config set-credentials admin \ --client-certificate=admin.pem \ --client-key=admin-key.pem ``` -``` +```shell kubectl config set-context kubernetes-the-hard-way \ --cluster=kubernetes-the-hard-way \ --user=admin ``` -``` +```shell kubectl config use-context kubernetes-the-hard-way ``` @@ -45,34 +44,34 @@ kubectl config use-context kubernetes-the-hard-way Check the health of the remote Kubernetes cluster: -``` +```shell kubectl get componentstatuses ``` > output -``` +```shell NAME STATUS MESSAGE ERROR -controller-manager Healthy ok -scheduler Healthy ok -etcd-2 Healthy {"health": "true"} -etcd-0 Healthy {"health": "true"} -etcd-1 Healthy {"health": "true"} +controller-manager Healthy ok +scheduler Healthy ok +etcd-2 Healthy {"health": "true"} +etcd-0 Healthy {"health": "true"} +etcd-1 Healthy {"health": "true"} ``` List the nodes in the remote Kubernetes cluster: -``` +```shell kubectl get nodes ``` > output -``` +```shell NAME STATUS AGE VERSION -worker-0 Ready 7m v1.7.4 -worker-1 Ready 4m v1.7.4 -worker-2 Ready 1m v1.7.4 +worker-0 Ready 7m v1.7.5 +worker-1 Ready 4m v1.7.5 +worker-2 Ready 1m v1.7.5 ``` Next: [Provisioning Pod Network Routes](11-pod-network-routes.md) diff --git a/docs/11-pod-network-routes.md b/docs/11-pod-network-routes.md index d1b6146f2..aa1b90f37 100644 --- a/docs/11-pod-network-routes.md +++ b/docs/11-pod-network-routes.md @@ -12,16 +12,17 @@ In this section you will gather the information required to create routes in the Print the internal IP address and Pod CIDR range for each worker instance: -``` +```shell for instance in worker-0 worker-1 worker-2; do - gcloud compute instances describe ${instance} \ - --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)' + PRIVATE_IP_ADDRESS=$(az vm show -d -g kubernetes -n ${instance} --query "privateIps" -otsv) + POD_CIDR=$(az vm show -g kubernetes --name worker-0 --query "tags" -o tsv) + echo $PRIVATE_IP_ADDRESS $POD_CIDR done ``` > output -``` +```shell 10.240.0.20 10.200.0.0/24 10.240.0.21 10.200.1.0/24 10.240.0.22 10.200.2.0/24 @@ -29,32 +30,44 @@ done ## Routes -Create network routes for each worker instance: +Create network routes for worker instance: + +```shell +az network route-table create -g kubernetes -n kubernetes-routes +``` +```shell +az network vnet subnet update -g kubernetes \ + -n kubernetes-subnet \ + --vnet-name kubernetes-vnet \ + --route-table kubernetes-routes ``` + +```shell for i in 0 1 2; do - gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \ - --network kubernetes-the-hard-way \ - --next-hop-address 10.240.0.2${i} \ - --destination-range 10.200.${i}.0/24 +az network route-table route create -g kubernetes \ + -n kubernetes-route-10-200-${i}-0-24 \ + --route-table-name kubernetes-routes \ + --address-prefix 10.200.${i}.0/24 \ + --next-hop-ip-address 10.240.0.2${i} \ + --next-hop-type VirtualAppliance done 
``` -List the routes in the `kubernetes-the-hard-way` VPC network: +List the routes in the `kubernetes-vnet` VPC network: -``` -gcloud compute routes list --filter "network kubernetes-the-hard-way" +```shell +az network route-table route list -g kubernetes --route-table-name kubernetes-routes -o table ``` > output -``` -NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY -default-route-77bcc6bee33b5535 kubernetes-the-hard-way 10.240.0.0/24 1000 -default-route-b11fc914b626974d kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000 -kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000 -kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000 -kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000 +```shell +AddressPrefix Name NextHopIpAddress NextHopType ProvisioningState ResourceGroup +--------------- ------------------------------ ------------------ ---------------- ------------------- --------------- +10.200.0.0/24 kubernetes-route-10-200-0-0-24 10.240.0.20 VirtualAppliance Succeeded kubernetes +10.200.1.0/24 kubernetes-route-10-200-1-0-24 10.240.0.21 VirtualAppliance Succeeded kubernetes +10.200.2.0/24 kubernetes-route-10-200-2-0-24 10.240.0.22 VirtualAppliance Succeeded kubernetes ``` Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md) diff --git a/docs/12-dns-addon.md b/docs/12-dns-addon.md index b7ad32a7a..c41978e7c 100644 --- a/docs/12-dns-addon.md +++ b/docs/12-dns-addon.md @@ -6,13 +6,13 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts Deploy the `kube-dns` cluster add-on: -``` +```shell kubectl create -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml ``` > output -``` +```shell serviceaccount "kube-dns" created configmap "kube-dns" created service "kube-dns" created @@ -21,13 +21,13 @@ deployment "kube-dns" created List the pods created by the `kube-dns` deployment: -``` +```shell kubectl get pods -l k8s-app=kube-dns -n kube-system ``` > output -``` +```shell NAME READY STATUS RESTARTS AGE kube-dns-3097350089-gq015 3/3 Running 0 20s kube-dns-3097350089-q64qc 3/3 Running 0 20s @@ -37,38 +37,38 @@ kube-dns-3097350089-q64qc 3/3 Running 0 20s Create a `busybox` deployment: -``` +```shell kubectl run busybox --image=busybox --command -- sleep 3600 ``` List the pod created by the `busybox` deployment: -``` +```shell kubectl get pods -l run=busybox ``` > output -``` +```shell NAME READY STATUS RESTARTS AGE busybox-2125412808-mt2vb 1/1 Running 0 15s ``` Retrieve the full name of the `busybox` pod: -``` +```shell POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}") ``` Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod: -``` +```shell kubectl exec -ti $POD_NAME -- nslookup kubernetes ``` > output -``` +```shell Server: 10.32.0.10 Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local diff --git a/docs/13-smoke-test.md b/docs/13-smoke-test.md index 7b7878728..267bc1687 100644 --- a/docs/13-smoke-test.md +++ b/docs/13-smoke-test.md @@ -8,37 +8,41 @@ In this section you will verify the ability to [encrypt secret data at rest](htt Create a generic secret: -``` +```shell kubectl create secret generic kubernetes-the-hard-way \ --from-literal="mykey=mydata" ``` Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd: -``` -gcloud compute ssh controller-0 \ - --command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C" +```shell 
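+# Run from your workstation: look up controller-0's public IP, SSH in, and dump the
+# raw etcd record. The k8s:enc:aescbc:v1:key1 prefix in the output below confirms
+# the secret is stored encrypted rather than in plaintext.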
+CONTROLLER="controller-0" +PUBLIC_IP_ADDRESS=$(az network public-ip show -g kubernetes \ + -n ${CONTROLLER}-pip --query "ipAddress" -otsv) + +ssh $(whoami)@${PUBLIC_IP_ADDRESS} \ + "ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C" ``` > output -``` +```shell 00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret| 00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern| 00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa| 00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc| -00000040 3a 76 31 3a 6b 65 79 31 3a 70 88 d8 52 83 b7 96 |:v1:key1:p..R...| -00000050 04 a3 bd 7e 42 9e 8a 77 2f 97 24 a7 68 3f c5 ec |...~B..w/.$.h?..| -00000060 9e f7 66 e8 a3 81 fc c8 3c df 63 71 33 0a 87 8f |..f.....<.cq3...| -00000070 0e c7 0a 0a f2 04 46 85 33 92 9a 4b 61 b2 10 c0 |......F.3..Ka...| -00000080 0b 00 05 dd c3 c2 d0 6b ff ff f2 32 3b e0 ec a0 |.......k...2;...| -00000090 63 d3 8b 1c 29 84 88 71 a7 88 e2 26 4b 65 95 14 |c...)..q...&Ke..| -000000a0 dc 8d 59 63 11 e5 f3 4e b4 94 cc 3d 75 52 c7 07 |..Yc...N...=uR..| -000000b0 73 f5 b4 b0 63 aa f9 9d 29 f8 d6 88 aa 33 c4 24 |s...c...)....3.$| -000000c0 ac c6 71 2b 45 98 9e 5f c6 a4 9d a2 26 3c 24 41 |..q+E.._....&<$A| -000000d0 95 5b d3 2c 4b 1e 4a 47 c8 47 c8 f3 ac d6 e8 cb |.[.,K.JG.G......| -000000e0 5f a9 09 93 91 d7 5d c9 c2 68 f8 cf 3c 7e 3b a3 |_.....]..h..<~;.| -000000f0 db d8 d5 9e 0c bf 2a 2f 58 0a |......*/X.| +00000040 3a 76 31 3a 6b 65 79 31 3a 65 c3 db a8 fb ae 9b |:v1:key1:e......| +00000050 f9 09 59 0b 12 fa 4f 5d 4c 6c c5 35 28 d8 72 08 |..Y...O]Ll.5(.r.| +00000060 f7 9e 4b 0a 6e 1d 6b 27 8f d2 7f 36 2b 11 6b 61 |..K.n.k'...6+.ka| +00000070 53 6a a7 24 56 e2 19 ee e7 04 94 ee b3 9c d3 c3 |Sj.$V...........| +00000080 68 b5 b8 51 8b 01 4e d9 f0 ce 40 9a 73 5c 10 28 |h..Q..N...@.s\.(| +00000090 18 bc ff 3a 51 4d bc 0c 6d 27 97 5c c6 bd a2 35 |...:QM..m'.\...5| +000000a0 88 18 56 16 c7 10 12 a1 e2 cf c5 62 6c 50 7e 67 |..V........blP~g| +000000b0 89 0c 42 56 73 69 48 bf 24 5e 91 91 56 2d 64 2f |..BVsiH.$^..V-d/| +000000c0 3a 35 b9 c9 08 41 d6 95 62 e8 1b 35 80 c9 8e 74 |:5...A..b..5...t| +000000d0 79 34 bc 5b 7c 68 cd 0c bc 11 21 c0 48 bc 92 a6 |y4.[|h....!.H...| +000000e0 2f b5 ef 18 5c f1 00 16 19 22 e8 9c c1 8c 3c 35 |/...\...."....<5| +000000f0 fa b3 87 51 85 bf f0 cd 0e 0a |...Q......| 000000fa ``` @@ -50,19 +54,19 @@ In this section you will verify the ability to create and manage [Deployments](h Create a deployment for the [nginx](https://nginx.org/en/) web server: -``` +```shell kubectl run nginx --image=nginx ``` List the pod created by the `nginx` deployment: -``` +```shell kubectl get pods -l run=nginx ``` > output -``` +```shell NAME READY STATUS RESTARTS AGE nginx-4217019353-b5gzn 1/1 Running 0 15s ``` @@ -73,46 +77,46 @@ In this section you will verify the ability to access applications remotely usin Retrieve the full name of the `nginx` pod: -``` +```shell POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}") ``` Forward port `8080` on your local machine to port `80` of the `nginx` pod: -``` +```shell kubectl port-forward $POD_NAME 8080:80 ``` > output -``` +```shell Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 ``` In a new terminal make an HTTP request using the forwarding address: -``` +```shell curl --head http://127.0.0.1:8080 ``` > output -``` +```shell HTTP/1.1 200 OK -Server: nginx/1.13.3 -Date: Thu, 31 Aug 2017 01:58:15 GMT +Server: 
nginx/1.13.5 +Date: Fri, 08 Sep 2017 20:33:16 GMT Content-Type: text/html Content-Length: 612 -Last-Modified: Tue, 11 Jul 2017 13:06:07 GMT +Last-Modified: Tue, 08 Aug 2017 15:25:00 GMT Connection: keep-alive -ETag: "5964cd3f-264" +ETag: "5989d7cc-264" Accept-Ranges: bytes ``` Switch back to the previous terminal and stop the port forwarding to the `nginx` pod: -``` +```shell Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 Handling connection for 8080 @@ -125,14 +129,14 @@ In this section you will verify the ability to [retrieve container logs](https:/ Print the `nginx` pod logs: -``` +```shell kubectl logs $POD_NAME ``` > output -``` -127.0.0.1 - - [31/Aug/2017:01:58:15 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-" +```shell +127.0.0.1 - - [08/Sep/2017:20:33:16 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-" ``` ### Exec @@ -141,14 +145,14 @@ In this section you will verify the ability to [execute commands in a container] Print the nginx version by executing the `nginx -v` command in the `nginx` container: -``` +```shell kubectl exec -ti $POD_NAME -- nginx -v ``` > output -``` -nginx version: nginx/1.13.3 +```shell +nginx version: nginx/1.13.5 ``` ## Services @@ -157,7 +161,7 @@ In this section you will verify the ability to expose applications using a [Serv Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service: -``` +```shell kubectl expose deployment nginx --port 80 --type NodePort ``` @@ -165,43 +169,51 @@ kubectl expose deployment nginx --port 80 --type NodePort Retrieve the node port assigned to the `nginx` service: -``` +```shell NODE_PORT=$(kubectl get svc nginx \ --output=jsonpath='{range .spec.ports[0]}{.nodePort}') ``` Create a firewall rule that allows remote access to the `nginx` node port: -``` -gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \ - --allow=tcp:${NODE_PORT} \ - --network kubernetes-the-hard-way +```shell +az network nsg rule create -g kubernetes \ + -n kubernetes-allow-nginx \ + --access allow \ + --destination-address-prefix '*' \ + --destination-port-range ${NODE_PORT} \ + --direction inbound \ + --nsg-name kubernetes-nsg \ + --protocol tcp \ + --source-address-prefix '*' \ + --source-port-range '*' \ + --priority 1002 ``` Retrieve the external IP address of a worker instance: -``` -EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ - --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') +```shell +EXTERNAL_IP=$(az network public-ip show -g kubernetes \ + -n worker-0-pip --query "ipAddress" -otsv) ``` Make an HTTP request using the external IP address and the `nginx` node port: -``` +```shell curl -I http://${EXTERNAL_IP}:${NODE_PORT} ``` > output -``` +```shell HTTP/1.1 200 OK -Server: nginx/1.13.3 -Date: Thu, 31 Aug 2017 02:00:21 GMT +Server: nginx/1.13.5 +Date: Fri, 08 Sep 2017 20:38:44 GMT Content-Type: text/html Content-Length: 612 -Last-Modified: Tue, 11 Jul 2017 13:06:07 GMT +Last-Modified: Tue, 08 Aug 2017 15:25:00 GMT Connection: keep-alive -ETag: "5964cd3f-264" +ETag: "5989d7cc-264" Accept-Ranges: bytes ``` diff --git a/docs/14-cleanup.md b/docs/14-cleanup.md index 2f18a9b4b..83b5d16f7 100644 --- a/docs/14-cleanup.md +++ b/docs/14-cleanup.md @@ -1,67 +1,7 @@ # Cleaning Up -In this labs you will delete the compute resources created during this tutorial. +The following command will delete the `kubernetes` resource group and all related resources created during this tutorial. 
-## Compute Instances - -Delete the controller and worker compute instances: - -``` -gcloud -q compute instances delete \ - controller-0 controller-1 controller-2 \ - worker-0 worker-1 worker-2 -``` - -## Networking - -Delete the external load balancer network resources: - -``` -gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ - --region $(gcloud config get-value compute/region) -``` - -``` -gcloud -q compute target-pools delete kubernetes-target-pool -``` - -``` -gcloud -q compute http-health-checks delete kube-apiserver-health-check -``` - -Delete the `kubernetes-the-hard-way` static IP address: - -``` -gcloud -q compute addresses delete kubernetes-the-hard-way -``` - -Delete the `kubernetes-the-hard-way` firewall rules: - -``` -gcloud -q compute firewall-rules delete \ - kubernetes-the-hard-way-allow-nginx-service \ - kubernetes-the-hard-way-allow-internal \ - kubernetes-the-hard-way-allow-external \ - kubernetes-the-hard-way-allow-health-checks -``` - -Delete the Pod network routes: - -``` -gcloud -q compute routes delete \ - kubernetes-route-10-200-0-0-24 \ - kubernetes-route-10-200-1-0-24 \ - kubernetes-route-10-200-2-0-24 -``` - -Delete the `kubernetes` subnet: - -``` -gcloud -q compute networks subnets delete kubernetes -``` - -Delete the `kubernetes-the-hard-way` network VPC: - -``` -gcloud -q compute networks delete kubernetes-the-hard-way -``` +```shell +az group delete --name kubernetes --yes --no-wait +``` \ No newline at end of file
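
Because `--no-wait` returns as soon as the delete request is accepted, the resources are removed in the background over the next several minutes. If you would rather block until everything is gone, one option (a sketch using the standard `az group` commands):

```shell
# Waits until the kubernetes resource group no longer exists
az group wait --name kubernetes --deleted
```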