Provision

Get the distribution

You will need to run the installer from either a Linux machine or a Darwin (macOS) machine. This machine will need SSH access to all of the machines that will become nodes of the Kubernetes cluster.

The installer can run from a machine that will become a node on the cluster, but since the installer's machine holds secrets (such as SSH/SSL keys and certificates), it's best to run it from a machine with limited user access and an encrypted disk.

The machine the installer is run from should remain available for future modifications to the cluster (adding and removing nodes, upgrading Kubernetes).

Download the installer

To unpack the installer from Linux

From an SSH session, type:

curl -L https://kismatic-installer.s3-accelerate.amazonaws.com/latest/kismatic.tar.gz | tar -zx

To unpack the installer from Darwin (macOS)

From a terminal window, type:

curl -L https://kismatic-installer.s3-accelerate.amazonaws.com/latest-darwin/kismatic.tar.gz | tar -zx

Generate a Plan File

From the machine where you unpacked Kismatic, run the following:

./kismatic install plan

You will be asked a few questions regarding the decisions you made in the Plan section above. The Kismatic installer will then generate a kismatic-cluster.yaml file (the plan file).

As machines are being provisioned, you must record their identity and credentials in this file.
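For illustration, the kind of entries involved might look roughly like this; the exact schema of kismatic-cluster.yaml may differ from this sketch, and every name and path below is a placeholder:

```yaml
cluster:
  name: kubernetes
  ssh:
    user: kismaticuser        # the provisioning account created on every node
    ssh_key: kismaticuser.key # private key path, relative to the installer directory
    ssh_port: 22
```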

Create Virtual Machines

Providing access to the Installer

To install Kubernetes with Kismatic, you will need a user with remote sudo access and an SSH public key added to each node. The same username and keypair must be used for all nodes. This account should only be used by the Kismatic installer.
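For example, passwordless sudo for such an account is commonly granted with a drop-in sudoers file on each node; the file name is arbitrary, and this is a general Linux pattern rather than a Kismatic-specific requirement:

```
# /etc/sudoers.d/kismaticuser -- allow the installer account to sudo without a password
kismaticuser ALL=(ALL) NOPASSWD:ALL
```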

We suggest a default user kismaticuser, with a corresponding private key kismaticuser.key added to the directory you're running the installer from. That directory is also a convenient place to generate the keypair:

ssh-keygen -t rsa -b 4096 -f kismaticuser.key -P ""

The resulting kismaticuser.key.pub will need to be copied to each node. ssh-copy-id can be convenient for this, or you can simply append its contents to ~/.ssh/authorized_keys for the kismaticuser account on each node.
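For example, with the node's address as a placeholder:

```
ssh-copy-id -i kismaticuser.key.pub kismaticuser@<node-ip>
```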

There are four pieces of information we will need to be able to address each node (a hedged example entry appears after this table):

| Field | Description |
| --- | --- |
| hostname | A short name that machines can reach each other with. If you opt for Kismatic to manage hosts files for your cluster, this name will be copied into them. |
| ip | The IP address the installer should use to connect to the node. If you don't specify a separate internal_ip for the node, this address will be used for cluster traffic as well. |
| internal_ip (optional) | In many cases nodes will have more than one physical network card or more than one IP address. Specifying an internal IP address allows you to route traffic over a specific network. It's best for Kubernetes components to communicate with each other via a local network rather than over the internet. |
| labels (optional) | For worker nodes, labels identify details of the hardware that you may want to expose to Kubernetes to aid in scheduling decisions. For example, if some of your worker nodes have GPUs and some don't, you may want to label the ones with GPUs. |
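As a hedged illustration of the four fields above, a worker entry in the plan file might look roughly like the following; the exact field names in kismatic-cluster.yaml may differ, and all values are placeholders:

```yaml
worker:
  nodes:
  - host: worker01               # hostname: short name nodes use to reach each other
    ip: 203.0.113.10             # address the installer connects to
    internal_ip: 192.168.10.10   # optional: carry cluster traffic on a private network
    labels:
      gpu: "true"                # optional: exposed to Kubernetes for scheduling
```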

Pre-Install Configuration

By default, Kismatic will attempt to install any of the software packages below that are missing. This can make the installation take significantly more time and bandwidth than pre-baking the packages into an image, and it requires internet access to the Kismatic package repository, Docker Hub, and a package repository for your operating system.

If you are building a large cluster or one that won't have access to these repositories, you will want to cache the necessary packages in a repository on your network.
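On a yum-based system, for example, a local mirror can be built with the standard yum-utils tooling; the repo id and paths here are purely illustrative:

```
reposync --repoid=kismatic --download_path=/srv/mirror   # pull packages from the upstream repo
createrepo /srv/mirror/kismatic                          # generate yum metadata for the mirror
```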

| Requirement | Required for | etcd | master | worker |
| --- | --- | --- | --- | --- |
| User and public key installed on all nodes | Access from Kismatic to manage the node | yes | yes | yes |
| /etc/ssh/sshd_config contains `PubkeyAuthentication yes` | Access from Kismatic to manage the node | yes | yes | yes |
| Access to an apt or yum repository | Retrieving binaries over the internet during installation | yes | yes | yes |
| Python 2.7 | Kismatic management of nodes | yes | yes | yes |
| Kismatic package of Docker 1.11.2 | Hosting containers | | yes | yes |
| Kismatic package of Etcd 2.3.7 and 3.0.13 | Inter-pod networking | yes | | |
| Kismatic package of Calico 0.22.0 | Inter-pod networking | | yes | yes |
| Kismatic package of Kubernetes Master 1.4.5-1 | Kubernetes | | yes | |
| Kismatic package of Kubernetes Worker 1.4.5-1 | Kubernetes | | | yes |
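Several of these requirements can be spot-checked by hand from the installer machine; the node address is a placeholder:

```
ssh -i kismaticuser.key kismaticuser@<node-ip> \
  'python --version; grep PubkeyAuthentication /etc/ssh/sshd_config'
```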

Inspector

To double-check that your nodes are fit for purpose, you can run the Kismatic inspector. This tool will be run on each node as part of validating your cluster and network fitness prior to installation.
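As a sketch, a client/server check might look like the following; the binary name, subcommands, flags, and port are assumptions, so consult the tool's help output for the actual invocation:

```
# on the node being checked:
./kismatic-inspector server --node-roles master
# from the installer machine:
./kismatic-inspector client <node-ip>:9090 --node-roles master
```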

Networking

Enter your network settings in the plan file, including the items below (a hedged sketch of this section follows the list):

  • pod networking technique (routed or overlay)
  • CIDR ranges for the pod and services networks
  • whether the Kismatic installer should manage hosts files for your cluster
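The plan-file section for these settings might look roughly like this; the key names are assumptions based on the fields described above, and the CIDRs are examples only:

```yaml
cluster:
  networking:
    type: overlay                      # or "routed"
    pod_cidr_block: 172.16.0.0/16      # addresses assigned to pods
    service_cidr_block: 172.20.0.0/16  # addresses assigned to services
    update_hosts_files: true           # let Kismatic manage hosts files on nodes
```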

Create your DNS CNAME or load balancer alias for your Kubernetes master nodes based on their hostnames.
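For example, a BIND-style CNAME record pointing a stable name at a master node; all names here are placeholders:

```
; zone file fragment for example.com
kubernetes   IN   CNAME   master01.example.com.
```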