Here are the cluster definitions for apiVersion "vlabs":
Name | Required | Description |
---|---|---|
apiVersion | yes | The version of the template. For "vlabs" the value is "vlabs" |
orchestratorProfile describes the orchestrator settings.
Name | Required | Description |
---|---|---|
orchestratorType | yes | Specifies the orchestrator type for the cluster |
Here are the valid values for the orchestrator types:
- DCOS - this represents the DC/OS orchestrator. Older releases of DC/OS 1.8 may be specified.
- Kubernetes - this represents the Kubernetes orchestrator.
- Swarm - this represents the Swarm orchestrator.
- Swarm Mode - this represents the Swarm Mode orchestrator.
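For example, a minimal orchestratorProfile that selects the Kubernetes orchestrator might look like the following sketch (add an orchestratorRelease if you need a specific release):
"orchestratorProfile": {
  "orchestratorType": "Kubernetes"
}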
kubernetesConfig describes Kubernetes specific configuration.
Name | Required | Description |
---|---|---|
kubernetesImageBase | no | Specifies the base URL (everything preceding the actual image filename) of the kubernetes hyperkube image to use for cluster deployment, e.g., k8s-gcrio.azureedge.net/ |
dockerEngineVersion | no | Which version of docker-engine to use in your cluster, e.g. "17.03.*" |
networkPolicy | no | Specifies the network policy tool for the cluster. Valid values are: "azure" (default), which provides an Azure native networking experience; "none" for not enforcing any network policy; "calico" for Calico network policy (required for Kubernetes network policies; clusters with Linux agents only); "cilium" for Cilium network policy (required for Kubernetes network policies; clusters with Linux agents only). See network policy examples for more information |
containerRuntime | no | The container runtime to use as a backend. The default is docker. The only other option is clear-containers |
clusterSubnet | no | The IP subnet used for allocating IP addresses for pod network interfaces. The subnet must be in the VNET address space. Default value is 10.244.0.0/16 |
dnsServiceIP | no | IP address for kube-dns to listen on. If specified must be in the range of serviceCidr |
dockerBridgeSubnet | no | The specific IP and subnet used for allocating IP addresses for the docker bridge network created on the kubernetes master and agents. Default value is 172.17.0.1/16. This value is used to configure the docker daemon using the --bip flag |
serviceCidr | no | IP range for Service IPs, Default is "10.0.0.0/16". This range is never routed outside of a node so does not need to lie within clusterSubnet or the VNET |
enableRbac | no | Enable Kubernetes RBAC (boolean - default == true) |
enableAggregatedAPIs | no | Enable Kubernetes Aggregated APIs. This is required by Service Catalog. (boolean - default == false) |
enableDataEncryptionAtRest | no | Enable kubernetes data encryption at rest. This is currently an alpha feature. (boolean - default == false) |
enablePodSecurityPolicy | no | Enable kubernetes pod security policy. This is currently a beta feature. (boolean - default == false) |
enableEncryptionWithExternalKms | no | Enable kubernetes data encryption at rest with external KMS. This is currently an alpha feature. (boolean - default == false) |
etcdDiskSizeGB | no | Size in GB to assign to etcd data volume. Defaults (if no user value provided) are: 256 GB for clusters up to 3 nodes; 512 GB for clusters with between 4 and 10 nodes; 1024 GB for clusters with between 11 and 20 nodes; and 2048 GB for clusters with more than 20 nodes |
privateCluster | no | Build a cluster without public addresses assigned. See privateClusters below. |
gcHighThreshold | no | Sets the --image-gc-high-threshold value on the kubelet configuration. Default is 85. See kubelet Garbage Collection |
gcLowThreshold | no | Sets the --image-gc-low-threshold value on the kubelet configuration. Default is 80. See kubelet Garbage Collection |
useInstanceMetadata | no | Use the Azure cloudprovider instance metadata service for appropriate resource discovery operations. Default is true |
addons | no | Configure various Kubernetes addons (currently supported: tiller, kubernetes-dashboard, rescheduler). See addons configuration below |
kubeletConfig | no | Configure various runtime configuration for kubelet. See kubeletConfig below |
controllerManagerConfig | no | Configure various runtime configuration for controller-manager. See controllerManagerConfig below |
cloudControllerManagerConfig | no | Configure various runtime configuration for cloud-controller-manager. See cloudControllerManagerConfig below |
apiServerConfig | no | Configure various runtime configuration for apiserver. See apiServerConfig below |
schedulerConfig | no | Configure various runtime configuration for scheduler. See schedulerConfig below |
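As an illustrative sketch only (the values below are placeholders, not recommendations), several of the options above can be combined in a single kubernetesConfig object:
"kubernetesConfig": {
  "networkPolicy": "azure",
  "clusterSubnet": "10.244.0.0/16",
  "serviceCidr": "10.0.0.0/16",
  "dnsServiceIP": "10.0.0.10",
  "enableRbac": true
}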
addons describes various addons configuration. It is a child property of kubernetesConfig. Below is a list of currently available addons:
Name of addon | Enabled by default? | How many containers | Description |
---|---|---|---|
tiller | true | 1 | Delivers the Helm server-side component: tiller. See https://github.com/kubernetes/helm for more info |
kubernetes-dashboard | true | 1 | Delivers the kubernetes dashboard component. See https://github.com/kubernetes/dashboard for more info |
rescheduler | false | 1 | Delivers the kubernetes rescheduler component |
To give a bit more info on the addons property: we've tried to expose the basic bits of data that allow useful configuration of these cluster features. Here are some example usage patterns that will unpack what addons provide:
To enable an addon (using "tiller" as an example):
"kubernetesConfig": {
"addons": [
{
"name": "tiller",
"enabled" : true
}
]
}
As you can see above, addons is an array child property of kubernetesConfig. Each addon that you want to add custom configuration to would be represented as an object item in the array. For example, to disable both tiller and dashboard:
"kubernetesConfig": {
"addons": [
{
"name": "tiller",
"enabled" : false
},
{
"name": "kubernetes-dashboard",
"enabled" : false
}
]
}
More usefully, let's add some custom configuration to both of the above addons:
"kubernetesConfig": {
"addons": [
{
"name": "tiller",
"containers": [
{
"name": "tiller",
"image": "myDockerHubUser/tiller:v3.0.0-alpha",
"cpuRequests": "1",
"memoryRequests": "1024Mi",
"cpuLimits": "1",
"memoryLimits": "1024Mi"
}
]
},
{
"name": "kubernetes-dashboard",
"containers": [
{
"name": "kubernetes-dashboard",
"cpuRequests": "50m",
"memoryRequests": "512Mi",
"cpuLimits": "50m",
"memoryLimits": "512Mi"
}
]
}
]
}
Above you see custom configuration for both tiller and kubernetes-dashboard. Both include specific resource limit values across the following dimensions:
- cpuRequests
- memoryRequests
- cpuLimits
- memoryLimits
See https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ for more on Kubernetes resource limits.
Additionally, we specified a custom docker image for tiller above; let's say we want to build a cluster and test an alpha version of tiller in it.
Finally, the addons.enabled boolean property was omitted above; that's by design. If you specify a containers configuration, acs-engine assumes you're enabling the addon. The very first example above demonstrates a simple "enable this addon with default configuration" declaration.
kubeletConfig declares runtime configuration for the kubelet running on all master and agent nodes. It is a generic key/value object, and a child property of kubernetesConfig. An example custom kubelet config:
"kubernetesConfig": {
"kubeletConfig": {
"--eviction-hard": "memory.available<250Mi,nodefs.available<20%,nodefs.inodesFree<10%"
}
}
See here for a reference of supported kubelet options.
Below is a list of kubelet options that acs-engine will configure by default:
kubelet option | default value |
---|---|
"--cloud-config" | "/etc/kubernetes/azure.json" |
"--cloud-provider" | "azure" |
"--cluster-domain" | "cluster.local" |
"--pod-infra-container-image" | "pause-amd64:version" |
"--max-pods" | "30", or "100" if using kubenet --network-plugin (i.e., "networkPolicy": "none" ) |
"--eviction-hard" | "memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%" |
"--node-status-update-frequency" | "10s" |
"--image-gc-high-threshold" | "85" |
"--image-gc-low-threshold" | "850" |
"--non-masquerade-cidr" | "10.0.0.0/8" |
"--azure-container-registry-config" | "/etc/kubernetes/azure.json" |
"--feature-gates" | No default (can be a comma-separated list). On agent nodes Accelerators=true will be applied in the --feature-gates option |
Below is a list of kubelet options that are not currently user-configurable, either because a higher order configuration vector is available that enforces kubelet configuration, or because a static configuration is required to build a functional cluster:
kubelet option | default value |
---|---|
"--address" | "0.0.0.0" |
"--allow-privileged" | "true" |
"--pod-manifest-path" | "/etc/kubernetes/manifests" |
"--network-plugin" | "cni" |
"--node-labels" | (based on Azure node metadata) |
"--cgroups-per-qos" | "true" |
"--enforce-node-allocatable" | "pods" |
"--kubeconfig" | "/var/lib/kubelet/kubeconfig" |
"--register-node" (master nodes only) | "true" |
"--register-with-taints" (master nodes only) | "node-role.kubernetes.io/master=true:NoSchedule" |
"--keep-terminated-pod-volumes" | "false" |
controllerManagerConfig declares runtime configuration for the kube-controller-manager daemon running on all master nodes. Like kubeletConfig, it is a generic key/value object, and a child property of kubernetesConfig. An example custom controller-manager config:
"kubernetesConfig": {
"controllerManagerConfig": {
"--node-monitor-grace-period": "40s",
"--pod-eviction-timeout": "5m0s",
"--route-reconciliation-period": "10s"
"--terminated-pod-gc-threshold": "5000"
}
}
See here for a reference of supported controller-manager options.
Below is a list of controller-manager options that acs-engine will configure by default:
controller-manager option | default value |
---|---|
"--node-monitor-grace-period" | "40s" |
"--pod-eviction-timeout" | "5m0s" |
"--route-reconciliation-period" | "10s" |
"--terminated-pod-gc-threshold" | "5000" |
"--feature-gates" | No default (can be a comma-separated list) |
Below is a list of controller-manager options that are not currently user-configurable, either because a higher order configuration vector is available that enforces controller-manager configuration, or because a static configuration is required to build a functional cluster:
controller-manager option | default value |
---|---|
"--kubeconfig" | "/var/lib/kubelet/kubeconfig" |
"--allocate-node-cidrs" | "false" |
"--cluster-cidr" | "10.240.0.0/12" |
"--cluster-name" | auto-generated using api model properties |
"--cloud-provider" | "azure" |
"--cloud-config" | "/etc/kubernetes/azure.json" |
"--root-ca-file" | "/etc/kubernetes/certs/ca.crt" |
"--cluster-signing-cert-file" | "/etc/kubernetes/certs/ca.crt" |
"--cluster-signing-key-file" | "/etc/kubernetes/certs/ca.key" |
"--service-account-private-key-file" | "/etc/kubernetes/certs/apiserver.key" |
"--leader-elect" | "true" |
"--v" | "2" |
"--profiling" | "false" |
"--use-service-account-credentials" | "false" ("true" if kubernetesConfig.enableRbac is true) |
cloudControllerManagerConfig declares runtime configuration for the cloud-controller-manager daemon running on all master nodes in a Cloud Controller Manager configuration. Like kubeletConfig, it is a generic key/value object, and a child property of kubernetesConfig. An example custom cloud-controller-manager config:
"kubernetesConfig": {
"cloudControllerManagerConfig": {
"--route-reconciliation-period": "1m"
}
}
See here for a reference of supported controller-manager options.
Below is a list of cloud-controller-manager options that acs-engine will configure by default:
controller-manager option | default value |
---|---|
"--route-reconciliation-period" | "10s" |
Below is a list of cloud-controller-manager options that are not currently user-configurable, either because a higher order configuration vector is available that enforces controller-manager configuration, or because a static configuration is required to build a functional cluster:
controller-manager option | default value |
---|---|
"--kubeconfig" | "/var/lib/kubelet/kubeconfig" |
"--allocate-node-cidrs" | "false" |
"--cluster-cidr" | "10.240.0.0/12" |
"--cluster-name" | auto-generated using api model properties |
"--cloud-provider" | "azure" |
"--cloud-config" | "/etc/kubernetes/azure.json" |
"--leader-elect" | "true" |
"--v" | "2" |
apiServerConfig declares runtime configuration for the kube-apiserver daemon running on all master nodes. Like kubeletConfig and controllerManagerConfig, it is a generic key/value object, and a child property of kubernetesConfig. An example custom apiserver config:
"kubernetesConfig": {
"apiServerConfig": {
"--request-timeout": "30s"
}
}
Or perhaps you want to customize/override the set of admission-control flags passed to the API server by default. You can omit the options you don't want and specify only the ones you need as follows:
"orchestratorProfile": {
"orchestratorType": "Kubernetes",
"orchestratorRelease": "1.8",
"kubernetesConfig": {
"apiServerConfig": {
"--admission-control": "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,AlwaysPullImages"
}
}
}
See here for a reference of supported apiserver options.
Below is a list of apiserver options that acs-engine will configure by default:
apiserver option | default value |
---|---|
"--admission-control" | "NamespaceLifecycle, LimitRanger, ServiceAccount, DefaultStorageClass, ResourceQuota, DenyEscalatingExec, AlwaysPullImages" |
"--authorization-mode" | "Node", "RBAC" (the latter if enabledRbac is true) |
"--audit-log-maxage" | "30" |
"--audit-log-maxbackup" | "10" |
"--audit-log-maxsize" | "100" |
"--feature-gates" | No default (can be a comma-separated list) |
"--oidc-username-claim" | "oid" (if has AADProfile) |
"--oidc-groups-claim" | "groups" (if has AADProfile) |
"--oidc-client-id" | calculated value that represents OID client ID (if has AADProfile) |
"--oidc-issuer-url" | calculated value that represents OID issuer URL (if has AADProfile) |
Below is a list of apiserver options that are not currently user-configurable, either because a higher order configuration vector is available that enforces apiserver configuration, or because a static configuration is required to build a functional cluster:
apiserver option | default value |
---|---|
"--bind-address" | "0.0.0.0" |
"--advertise-address" | calculated value that represents listening URI for API server |
"--allow-privileged" | "true" |
"--anonymous-auth" | "false |
"--audit-log-path" | "/var/log/apiserver/audit.log" |
"--insecure-port" | "8080" |
"--secure-port" | "443" |
"--service-account-lookup" | "true" |
"--etcd-cafile" | "/etc/kubernetes/certs/ca.crt" |
"--etcd-certfile" | "/etc/kubernetes/certs/etcdclient.crt" |
"--etcd-keyfile" | "/etc/kubernetes/certs/etcdclient.key" |
"--etcd-servers" | calculated value that represents etcd servers |
"--profiling" | "false" |
"--repair-malformed-updates" | "false" |
"--tls-cert-file" | "/etc/kubernetes/certs/apiserver.crt" |
"--tls-private-key-file" | "/etc/kubernetes/certs/apiserver.key" |
"--client-ca-file" | "/etc/kubernetes/certs/ca.crt" |
"--service-account-key-file" | "/etc/kubernetes/certs/apiserver.key" |
"--kubelet-client-certificate" | "/etc/kubernetes/certs/client.crt" |
"--kubelet-client-key" | "/etc/kubernetes/certs/client.key" |
"--service-cluster-ip-range" | see serviceCIDR |
"--storage-backend" | calculated value that represents etcd version |
"--v" | "4" |
"--experimental-encryption-provider-config" | "/etc/kubernetes/encryption-config.yaml" (if enableDataEncryptionAtRest is true) |
"--experimental-encryption-provider-config" | "/etc/kubernetes/encryption-config.yaml" (if enableEncryptionWithExternalKms is true) |
"--requestheader-client-ca-file" | "/etc/kubernetes/certs/proxy-ca.crt" (if enableAggregatedAPIs is true) |
"--proxy-client-cert-file" | "/etc/kubernetes/certs/proxy.crt" (if enableAggregatedAPIs is true) |
"--proxy-client-key-file" | "/etc/kubernetes/certs/proxy.key" (if enableAggregatedAPIs is true) |
"--requestheader-allowed-names" | "" (if enableAggregatedAPIs is true) |
"--requestheader-extra-headers-prefix" | "X-Remote-Extra-" (if enableAggregatedAPIs is true) |
"--requestheader-group-headers" | "X-Remote-Group" (if enableAggregatedAPIs is true) |
"--requestheader-username-headers" | "X-Remote-User" (if enableAggregatedAPIs is true) |
"--cloud-provider" | "azure" (unless useCloudControllerManager is true) |
"--cloud-config" | "/etc/kubernetes/azure.json" (unless useCloudControllerManager is true) |
schedulerConfig declares runtime configuration for the kube-scheduler daemon running on all master nodes. Like kubeletConfig, controllerManagerConfig, and apiServerConfig, it is a generic key/value object, and a child property of kubernetesConfig. An example custom scheduler config:
"kubernetesConfig": {
"schedulerConfig": {
"--v": "2"
}
}
See here for a reference of supported kube-scheduler options.
Below is a list of scheduler options that acs-engine will configure by default:
kube-scheduler option | default value |
---|---|
"--v" | "2" |
"--feature-gates" | No default (can be a comma-separated list) |
Below is a list of kube-scheduler options that are not currently user-configurable, either because a higher order configuration vector is available that enforces kube-scheduler configuration, or because a static configuration is required to build a functional cluster:
kube-scheduler option | default value |
---|---|
"--kubeconfig" | "/var/lib/kubelet/kubeconfig" |
"--leader-elect" | "true" |
"--profiling" | "false" |
We consider kubeletConfig, controllerManagerConfig, apiServerConfig, and schedulerConfig to be generic conveniences that add power/flexibility to cluster deployments. Their usage comes with no operational guarantees! They are manual tuning features that enable low-level configuration of a kubernetes cluster.
privateCluster defines a cluster without public addresses assigned. It is a child property of kubernetesConfig.
Name | Required | Description |
---|---|---|
enabled | no | Enable Private Cluster (boolean - default == false) |
jumpboxProfile | no | Configure and auto-provision a jumpbox to access your private cluster. jumpboxProfile is ignored if enabled is false. See jumpboxProfile below |
jumpboxProfile describes the settings for a jumpbox deployed via acs-engine to access a private cluster. It is a child property of privateCluster.
Name | Required | Description |
---|---|---|
name | yes | This is the unique name for the jumpbox VM. Some resources deployed with the jumpbox are derived from this name |
vmSize | yes | Describes a valid Azure VM size |
publicKey | yes | The public SSH key used for authenticating access to the jumpbox. Here are instructions for generating a public/private key pair |
osDiskSizeGB | no | Describes the OS Disk Size in GB. Defaults to 30 |
storageProfile | no | Specifies the storage profile to use. Valid values are StorageAccount or ManagedDisks. Defaults to StorageAccount |
username | no | Describes the admin username to be used on the jumpbox. Defaults to azureuser |
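Putting the two together, a minimal sketch of a private cluster with an auto-provisioned jumpbox might look like this (the jumpbox name, VM size, and key data are placeholders):
"kubernetesConfig": {
  "privateCluster": {
    "enabled": true,
    "jumpboxProfile": {
      "name": "my-jumpbox",
      "vmSize": "Standard_DS2_v2",
      "osDiskSizeGB": 30,
      "storageProfile": "ManagedDisks",
      "username": "azureuser",
      "publicKey": "ssh-rsa AAAA...placeholder"
    }
  }
}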
masterProfile describes the settings for master configuration.
Name | Required | Description |
---|---|---|
count | yes | Masters have a count value of 1, 3, or 5 |
dnsPrefix | yes | The dns prefix for the master FQDN. The master FQDN is used for SSH or commandline access. This must be a unique name. (bring your own VNET examples) |
subjectAltNames | no | An array of fully qualified domain names that can be used to reach the API server. These domains are added as Subject Alternative Names to the generated API server certificate. NOTE: These domains will not be automatically provisioned. |
firstConsecutiveStaticIP | only required when vnetSubnetId specified | The IP address of the first master. IP Addresses will be assigned consecutively to additional master nodes |
vmsize | yes | Describes a valid Azure VM size. These are restricted to machines with at least 2 cores and 100GB of ephemeral disk space |
osDiskSizeGB | no | Describes the OS Disk Size in GB |
vnetSubnetId | no | Specifies the Id of an alternate VNET subnet. The subnet id must specify a valid VNET ID owned by the same subscription. (bring your own VNET examples) |
extensions | no | This is an array of extensions. This indicates that the extension will be run on a single master. The name in the extensions array must exactly match the extension name in the extensionProfiles |
vnetCidr | no | Specifies the VNET cidr when using a custom VNET (bring your own VNET examples) |
imageReference.name | no | The name of the Linux OS image. Needs to be used in conjunction with resourceGroup, below |
imageReference.resourceGroup | no | Resource group that contains the Linux OS image. Needs to be used in conjunction with name, above |
distro | no | Select Master(s) Operating System (Linux only). Currently supported values are: ubuntu and coreos (CoreOS support is currently experimental). Defaults to ubuntu if undefined. Currently supported OS and orchestrator configurations -- ubuntu : DCOS, Docker Swarm, Kubernetes; coreos : Kubernetes. Example of CoreOS Master with CoreOS Agents |
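A minimal masterProfile sketch, assuming the default ubuntu distro (the dnsPrefix and VM size are placeholders):
"masterProfile": {
  "count": 3,
  "dnsPrefix": "mycluster-master",
  "vmSize": "Standard_D2_v2"
}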
A cluster can have 0 to 12 agent pool profiles. Agent Pool Profiles are used for creating agents with different capabilities such as VMSizes, VMSS or Availability Set, Public/Private access, user-defined OS Images, attached storage disks, attached managed disks, or Windows.
Name | Required | Description |
---|---|---|
availabilityProfile | no | Supported values are VirtualMachineScaleSets (default) and AvailabilitySet. For Kubernetes clusters before k8s version 1.10, use AvailabilitySet. Otherwise, you should use VirtualMachineScaleSets, unless you need features such as dynamic attached disks |
count | yes | Describes the node count |
diskSizesGB | no | Describes an array of up to 4 attached disk sizes. Valid disk size values are between 1 and 1024 |
dnsPrefix | Required if agents are to be exposed publicly with a load balancer | The dns prefix that forms the FQDN to access the loadbalancer for this agent pool. This must be a unique name among all agent pools. Not supported for Kubernetes clusters |
name | yes | This is the unique name for the agent pool profile. The resources of the agent pool profile are derived from this name |
ports | only required if needed for exposing services publicly | Describes an array of ports needed for exposing publicly. A tcp probe is configured for each port and only opens to an agent node if the agent node is listening on that port. A maximum of 150 ports may be specified. Not supported for Kubernetes clusters |
storageProfile | no | Specifies the storage profile to use. Valid values are StorageAccount or ManagedDisks. Defaults to StorageAccount |
vmsize | yes | Describes a valid Azure VM size. These are restricted to machines with at least 2 cores |
osDiskSizeGB | no | Describes the OS Disk Size in GB |
vnetSubnetId | no | Specifies the Id of an alternate VNET subnet. The subnet id must specify a valid VNET ID owned by the same subscription. (bring your own VNET examples) |
imageReference.name | no | The name of a Linux OS image. Needs to be used in conjunction with resourceGroup, below |
imageReference.resourceGroup | no | Resource group that contains the Linux OS image. Needs to be used in conjunction with name, above |
distro | no | Specifies agent pool(s) Operating System (Linux). Supported values are ubuntu and coreos (CoreOS support is currently experimental). Defaults to ubuntu if undefined, unless osType is defined as Windows (in which case distro is unused). Currently supported OS and orchestrator configurations -- ubuntu : DCOS, Docker Swarm, Kubernetes; coreos : Kubernetes. Example of CoreOS Master with Windows and Linux (CoreOS and Ubuntu) Agents |
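For example, a single Linux agent pool might be declared as in this sketch (the pool name, count, and VM size are placeholders):
"agentPoolProfiles": [
  {
    "name": "agentpool1",
    "count": 3,
    "vmSize": "Standard_D2_v2",
    "availabilityProfile": "AvailabilitySet"
  }
]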
linuxProfile provides the linux configuration for each linux node in the cluster.
Name | Required | Description |
---|---|---|
adminUsername | yes | Describes the username to be used on all linux clusters |
ssh.publicKeys.keyData | yes | The public SSH key used for authenticating access to all Linux nodes in the cluster. Here are instructions for generating a public/private key pair |
secrets | no | Specifies an array of key vaults to pull secrets from and what secrets to pull from each |
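A minimal linuxProfile sketch (the username and key data are placeholders):
"linuxProfile": {
  "adminUsername": "azureuser",
  "ssh": {
    "publicKeys": [
      {
        "keyData": "ssh-rsa AAAA...placeholder"
      }
    ]
  }
}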
secrets details which certificates to install on the masters and nodes in the cluster.
A cluster can have a list of key vaults to install certs from.
On linux boxes the certs are saved under the directory "/var/lib/waagent/". Two files are saved per certificate:
- {thumbprint}.crt : this is the full cert chain saved in PEM format
- {thumbprint}.prv : this is the private key saved in PEM format
Name | Required | Description |
---|---|---|
sourceVault.id | yes | The azure resource manager id of the key vault to pull secrets from |
vaultCertificates.certificateUrl | yes | Keyvault URL to this cert including the version |
The format for sourceVault.id can be obtained from the CLI or found in the portal:
/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.KeyVault/vaults/{keyvaultname}
The format for vaultCertificates.certificateUrl can be obtained from the CLI or found in the portal:
https://{keyvaultname}.vault.azure.net:443/secrets/{secretName}/{version}
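Putting those formats together, a hedged sketch of a secrets entry (all bracketed values are placeholders to fill in):
"secrets": [
  {
    "sourceVault": {
      "id": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.KeyVault/vaults/{keyvaultname}"
    },
    "vaultCertificates": [
      {
        "certificateUrl": "https://{keyvaultname}.vault.azure.net:443/secrets/{secretName}/{version}"
      }
    ]
  }
]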
servicePrincipalProfile describes the Azure service principal credentials to be used by the cluster for self-configuration. See service principal for more details on creation.
Name | Required | Description |
---|---|---|
clientId | yes, for Kubernetes clusters | describes the Azure client id. It is recommended to use a separate client ID per cluster |
secret | yes, for Kubernetes clusters | describes the Azure client secret. It is recommended to use a separate client secret per client id |
objectId | optional, for Kubernetes clusters | describes the Azure service principal object id. It is required if enableEncryptionWithExternalKms is true |
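A servicePrincipalProfile sketch (the client id and secret are placeholders):
"servicePrincipalProfile": {
  "clientId": "00000000-0000-0000-0000-000000000000",
  "secret": "<service principal secret>"
}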
Here are the cluster definitions for apiVersion "2016-03-30". This matches the api version of the Azure Container Service Engine.
Name | Required | Description |
---|---|---|
apiVersion | yes | The version of the template. For "2016-03-30" the value is "2016-03-30" |
orchestratorProfile describes the orchestrator settings.
Name | Required | Description |
---|---|---|
orchestratorType | yes | Specifies the orchestrator type for the cluster |
Here are the valid values for the orchestrator types:
- DCOS - this represents the DC/OS orchestrator.
- Swarm - this represents the Swarm orchestrator.
- Kubernetes - this represents the Kubernetes orchestrator.
- Swarm Mode - this represents the Swarm Mode orchestrator.
masterProfile describes the settings for master configuration.
Name | Required | Description |
---|---|---|
count | yes | Masters have a count value of 1, 3, or 5 |
dnsPrefix | yes | The dns prefix for the masters FQDN. The master FQDN is used for SSH or commandline access. This must be a unique name. (bring your own VNET examples) |
For apiVersion "2016-03-30", a cluster may have only 1 agent pool profiles.
Name | Required | Description |
---|---|---|
count | yes | Describes the node count |
dnsPrefix | required if agents are to be exposed publicly with a load balancer | The dns prefix that forms the FQDN to access the loadbalancer for this agent pool. This must be a unique name among all agent pools |
name | yes | The unique name for the agent pool profile. The resources of the agent pool profile are derived from this name |
vmsize | yes | Describes a valid Azure VM size. These are restricted to machines with at least 2 cores |
linuxProfile provides the linux configuration for each linux node in the cluster.
Name | Required | Description |
---|---|---|
adminUsername | yes | Describes the username to be used on all linux clusters |
ssh.publicKeys[0].keyData | yes | The public SSH key used for authenticating access to all Linux nodes in the cluster. Here are instructions for generating a public/private key pair |
aadProfile provides AAD integration configuration for the cluster, currently only available for the Kubernetes orchestrator.
Name | Required | Description |
---|---|---|
clientAppID | yes | Describes the client AAD application ID |
serverAppID | yes | Describes the server AAD application ID |
tenantID | no | Describes the AAD tenant ID to use for authentication. If not specified, will use the tenant of the deployment subscription |
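A sketch of this AAD configuration, assuming the property is named aadProfile as in the vlabs api model (all IDs are placeholders):
"aadProfile": {
  "clientAppID": "00000000-0000-0000-0000-000000000000",
  "serverAppID": "00000000-0000-0000-0000-000000000000",
  "tenantID": "00000000-0000-0000-0000-000000000000"
}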
A cluster can have 0 - N extensions in extension profiles. Extension profiles allow a user to easily add pre-packaged functionality into a cluster. An example would be configuring a monitoring solution on your cluster. You can think of extensions like a marketplace for acs clusters.
Name | Required | Description |
---|---|---|
name | yes | The name of the extension. This has to exactly match the name of a folder under the extensions folder |
version | yes | The version of the extension. This has to exactly match the name of the folder under the extension name folder |
extensionParameters | optional | Extension parameters may be required by extensions. The format of the parameters is also extension dependent |
rootURL | optional | URL to the root location of extensions. The rootURL must have an extensions child folder that follows the extensions convention. The rootURL is mainly used for testing purposes |
You can find more information, as well as a list of extensions on the extensions documentation.
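As a sketch, an extension is declared once in extensionProfiles and then referenced by name from a profile's extensions array, as described for masterProfile above (the extension name and version here are placeholders, not a real extension):
"extensionProfiles": [
  {
    "name": "my-extension",
    "version": "v1"
  }
]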