Deploy an OpenShift Cluster

Install Pre-requisites

All the commands in this guide require both the Azure CLI and acs-engine. Follow the installation instructions to download acs-engine before continuing, or compile it from source.

To install the Azure CLI, follow the official documentation for your operating system.
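For example, on Debian or Ubuntu the documented one-line installer is as follows (other platforms differ; check the docs linked above):

$ curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash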

Overview

acs-engine reads a cluster definition (or api model) which describes the size, shape, and configuration of your cluster. This guide follows the default configuration of one master and two Linux nodes: one node is used by the OpenShift internal infrastructure (infra node) and the other is for end-user workloads (compute node). At least one of each node type is required for a working OpenShift cluster. In the openshift.json file, one agent pool specifies the number of infrastructure nodes and another the number of compute nodes, as sketched below. If you would like to change these numbers, edit examples/openshift.json before continuing.
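For orientation, the node counts live in the agentPoolProfiles section of the definition. A representative, abbreviated sketch (not the literal file contents; the pool names match the examples later in this guide):

"agentPoolProfiles": [
  {
    "name": "compute",
    "count": 1,
    ...
  },
  {
    "name": "infra",
    "count": 1,
    ...
  }
]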

The acs-engine deploy command automates creation of a Service Principal, Resource Group, and SSH key for your cluster. If you need more control or are interested in the individual steps, see the "ACS Engine the Long Way" section below.

Preparing for the Deployment

In order to deploy OpenShift, you will need the following:

  • The subscription and tenant ID in which you would like to provision the cluster. Both UUIDs can be found with az account list -o json, under the id and tenantId fields.
  • Proper access rights within the subscription, in particular the right to create and assign service principals to applications (see ACS Engine the Long Way, Step #2).
  • A dnsPrefix which forms part of the hostname for your cluster (e.g. staging, prodwest, blueberry). The DNS prefix must be unique in the given geographical location, so pick a name that is unlikely to collide.
  • A location to provision the cluster, e.g. eastus.
$ az account list -o json
[
  {
    "cloudName": "AzureCloud",
    "id": "5eca53b6-18b4-4d9b-a4d4-a45a1ff367c8",
    "isDefault": true,
    "name": "Acme Corp",
    "state": "Enabled",
    "tenantId": "5a27b61b-1b6e-4be5-aa9d-0d5696076bb9",
    "user": {
      "name": "a2ada0c0-1a4d-4923-848f-c04b8b301c13",
      "type": "servicePrincipal"
    }
  }
]
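If multiple subscriptions are listed, you can pull just the two values for the active account with the CLI's --query flag (a convenience, not a requirement):

$ az account show --query "{subscriptionId: id, tenantId: tenantId}" -o json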

Deploy

For this example, the subscription ID is 5eca53b6-18b4-4d9b-a4d4-a45a1ff367c8, the tenant ID is 5a27b61b-1b6e-4be5-aa9d-0d5696076bb9, the DNS prefix and resource group are both openshift-red, and the location is eastus.

Before running the acs-engine deploy command, you must fill in all missing fields in the examples/openshift.json file. See the "Long Way" section below for the description of required values.

Now you can run acs-engine deploy with the appropriate arguments:

$ acs-engine deploy --subscription-id 5eca53b6-18b4-4d9b-a4d4-a45a1ff367c8 \
    --resource-group openshift-red --location eastus \
    --api-model examples/openshift.json

INFO[0034] Starting ARM Deployment (openshift-red-1843927849). This will take some time...
INFO[0393] Finished ARM Deployment (openshift-red-1843927849).

As well as deploying the cluster, acs-engine outputs Azure Resource Manager (ARM) templates, SSH keys (only if generated by acs-engine), and the node configuration in the _output/openshift-red directory:

  • _output/openshift-red/apimodel.json
  • _output/openshift-red/azuredeploy.json
  • _output/openshift-red/azuredeploy.parameters.json
  • _output/openshift-red/master.tar.gz
  • _output/openshift-red/node.tar.gz

Administrative note: By default, the directory where acs-engine stores cluster configuration (_output/openshift-red above) won't be overwritten by subsequent attempts to deploy a cluster using the same --dns-prefix. To re-use the same resource group name repeatedly, include the --force-overwrite command line option with your acs-engine deploy command.

Bonus tip: include the --auto-suffix option to append a randomly generated suffix to the dns-prefix to form the resource group name, for example if your workflow requires a common prefix across multiple cluster deployments. --auto-suffix appends a compressed timestamp, ensuring a unique cluster name (and thus that each deployment's configuration artifacts are stored locally under a discrete _output/<resource-group-name>/ directory).
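For example, to redeploy over the artifacts from the run above (same values as the earlier deploy command, plus the flag described in the administrative note):

$ acs-engine deploy --subscription-id 5eca53b6-18b4-4d9b-a4d4-a45a1ff367c8 \
    --resource-group openshift-red --location eastus \
    --api-model examples/openshift.json --force-overwrite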

After a couple of minutes, your OpenShift web console should be accessible at https://${dnsprefix}.${location}.cloudapp.azure.com:8443/. You can log in with the clusterUsername and clusterPassword values you set in the openshift.json file, or with your Azure account if you've enabled AAD integration.
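If you have the OpenShift oc client installed (not covered by this guide), you can also log in from a terminal; a sketch, substituting your own prefix, location, and credentials:

$ oc login https://${dnsprefix}.${location}.cloudapp.azure.com:8443 \
    --username=<clusterUsername> --password=<clusterPassword>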

For next steps, see the getting started documentation on the OpenShift website.

ACS Engine the Long Way

Step 1: Generate an SSH Key

In addition to using the OpenShift web console, CLI, and API to interact with the cluster, cluster operators may access the master machine using SSH.

If you don't have an SSH key, you can generate a new one.
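For example, to generate a new 4096-bit RSA key pair (the .pub file is what you'll paste into keyData in Step 3; the file name here is just a suggestion):

$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/openshift_rsa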

Step 2: Create a Service Principal

The OpenShift cluster needs a Service Principal to interact with Azure Resource Manager (ARM), for example to dynamically create Azure persistent storage volumes. Follow the instructions to create a new service principal.
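A minimal sketch using the Azure CLI (the linked instructions are authoritative; the name and role shown here are representative choices, not requirements):

$ az ad sp create-for-rbac --name openshift-red-sp --role Contributor \
    --scopes /subscriptions/5eca53b6-18b4-4d9b-a4d4-a45a1ff367c8

The appId and password fields of the output map to the clientId and secret values used in Step 3.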

Step 3: Edit your Cluster Definition

ACS Engine consumes a cluster definition which outlines the desired shape, size, and configuration of OpenShift. A number of features can be enabled through the cluster definition: check the examples directory for samples.

Edit the simple OpenShift cluster definition and fill out the required values (every value with an empty default "" must be filled in); a representative sketch follows the list:

  • openShiftConfig: contains authentication info for OpenShift. You must provide clusterUsername and clusterPassword for htpasswd authentication. Optionally, set enableAADAuthentication to true to enable AAD integration between OpenShift and Azure. AAD authentication is the preferred and more secure option, but it requires more prerequisites and privileges to enable.
  • azProfile: Azure account information. subscriptionId and tenantId can be obtained from the az account list -o json command; resourceGroup and location should be filled in based on your preference.
  • masterProfile:
    • count: currently only one master is supported
    • dnsPrefix: must be a region-unique name and will form part of the hostname (e.g. myprod1, staging, leapingllama)
    • vmSize: must be at least Standard_D4s_v3
  • agentPoolProfiles: contains specification for infrastructure and compute nodes
    • vmSize: must be at least Standard_D4s_v3
  • linuxProfile: keyData must contain the public portion of an SSH key - this will be associated with the adminUsername system user (e.g. 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABA....')
  • clientId: the service principal's appId UUID or name from step 2. Leave blank if using AAD integration.
  • secret: the service principal's password (possibly randomly generated) from step 2. Leave blank if using AAD integration.
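A minimal sketch of the values being filled in, assuming htpasswd authentication (all values are placeholders and sections are abbreviated with ...; keep the exact structure already present in examples/openshift.json):

{
  ...
  "openShiftConfig": {
    "clusterUsername": "admin",
    "clusterPassword": "CHANGEME"
  },
  "azProfile": {
    "subscriptionId": "5eca53b6-18b4-4d9b-a4d4-a45a1ff367c8",
    "tenantId": "5a27b61b-1b6e-4be5-aa9d-0d5696076bb9",
    "resourceGroup": "openshift-red",
    "location": "eastus"
  },
  "masterProfile": {
    "count": 1,
    "dnsPrefix": "openshift-red",
    "vmSize": "Standard_D4s_v3"
  },
  ...
}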

Step 4: Generate the Templates

The generate command takes a cluster definition and outputs a template and parameters file which describe your OpenShift cluster. By default, generate creates a new directory named after your cluster nested in the _output directory. If your dnsPrefix were openshift-red, your cluster templates would be found in _output/openshift-red.

Run the generate command:

$ acs-engine generate examples/openshift.json
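When it completes, the generated artifacts are under _output/openshift-red; the listing mirrors the files named in the Deploy section above:

$ ls _output/openshift-red/
apimodel.json  azuredeploy.json  azuredeploy.parameters.json  master.tar.gz  node.tar.gz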

Step 5: Submit your Templates to Azure Resource Manager (ARM)

Deploy the generated azuredeploy.json and azuredeploy.parameters.json.
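A sketch using the Azure CLI (az group deployment create was the ARM deployment subcommand in CLI versions contemporary with acs-engine; newer versions rename it):

$ az group deployment create --resource-group openshift-red \
    --template-file _output/openshift-red/azuredeploy.json \
    --parameters @_output/openshift-red/azuredeploy.parameters.json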

Step 6: Check the OpenShift cluster

After a couple of minutes, your OpenShift web console should be accessible at https://${dnsprefix}.${location}.cloudapp.azure.com:8443/. You can log in with the clusterUsername and clusterPassword values you set in the openshift.json file, or with your Azure account if you've enabled AAD integration.

For next steps, see the getting started documentation on the OpenShift website.

Custom VNET

ACS Engine supports deploying into an existing VNET. Operators must specify the ARM path/id of a subnet for the masterProfile and each entry in agentPoolProfiles, as well as the master IP address in firstConsecutiveStaticIP. Note: currently, OpenShift clusters cannot be set up in the 172.30.0.0/16 range.

To create a vnet and a subnet, for example:

az network vnet create -g $RESOURCE_GROUP -n $VNET_NAME --address-prefixes 10.239.0.0/16 --subnet-name $SUBNET_NAME --subnet-prefix 10.239.0.0/24

To get the vnetSubnetId:

az network vnet subnet show -n $SUBNET_NAME -g $RESOURCE_GROUP --vnet-name $VNET_NAME --query id

Edit the OpenShift with custom VNET cluster definition and fill out the required values (every value with an empty default "" must be filled in).

Before provisioning, modify the masterProfile and agentPoolProfiles to match the requirements above; the following is a representative example:

"masterProfile": {
  ...
  "vnetSubnetId": "/subscriptions/SUB_ID/resourceGroups/RG_NAME/providers/Microsoft.Network/virtualNetworks/VNET_NAME/subnets/SUBNET_NAME",
  "firstConsecutiveStaticIP": "10.239.0.239",
  ...
},
...
"agentPoolProfiles": [
  {
    ...
    "name": "compute",
    "vnetSubnetId": "/subscriptions/SUB_ID/resourceGroups/RG_NAME/providers/Microsoft.Network/virtualNetworks/VNET_NAME/subnets/SUBNET_NAME",
    ...
  },
  {
    ...
    "name": "infra",
    "vnetSubnetId": "/subscriptions/SUB_ID/resourceGroups/RG_NAME/providers/Microsoft.Network/virtualNetworks/VNET_NAME/subnets/SUBNET_NAME",
    ...
  }
]