We moved this repository into maintenance mode for the following reasons:

- It was an MVP for an easy Kubernetes installation based on community tools like kubeadm and terraform
- The OpenStack integration is still brittle (e.g. SecurityGroups are not created in a reliable fashion)
- Critical features like updates are missing
- The setup is very opinionated (but simple)
- There are other tools to bootstrap a Kubernetes cluster on OpenStack (even if these tools are often more complex)

For an alternative, take a look at kube-spray, kubeone or one of the other Kubernetes bootstrapping tools.
TL;DR: This repository deploys an opinionated Kubernetes cluster on OpenStack with `kubeadm` and `terraform`.
After cloning or downloading the repository, follow these steps to get your cluster up and running.
Take a look at the example provided in the `example` folder. It contains three files: `main.tf`, `provider.tf`, and `variables.tf`. Have a look at `main.tf` and customize settings like `master_data_volume_size` or `node_data_volume_size` to your needs; you may have to stay below quotas set by your OpenStack admin. Pick an instance flavor that has at least two vCPUs, otherwise kubeadm will fail during its pre-flight check.
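If you are unsure which flavor to pick or how much quota is left, the plain OpenStack CLI can tell you (assuming your OpenStack credentials are already sourced in your shell):

```bash
# List available flavors; pick one with at least 2 vCPUs for the nodes.
openstack flavor list

# Show your project's quotas so the chosen volume sizes and instance counts stay below them.
openstack quota show
```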
We assume `example` to be your working directory for all following commands.
The Kubernetes cluster will use Keystone authentication (over a WebHook). For more details, take a look at the official docs or just use the quick start:
```bash
VERSION=v1.19.0
OS=$(uname | tr '[:upper:]' '[:lower:]')
curl -sLO "https://github.com/kubernetes/cloud-provider-openstack/releases/download/${VERSION}/cloud-provider-openstack-${VERSION}-${OS}-amd64.tar.gz"
tar xfz cloud-provider-openstack-${VERSION}-${OS}-amd64.tar.gz
rm cloud-provider-openstack-${VERSION}-${OS}-amd64.tar.gz
mkdir $(pwd)/bin
cp ${OS}-amd64/client-keystone-auth $(pwd)/bin/
rm -rf ${OS}-amd64
```
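The kubeconfig created by terraform is expected to call this binary through the exec credential plugin. If you ever need to wire it up by hand, a minimal sketch could look like this (the user name `openstackuser` is an assumption, not something this module defines; adjust it and the path to your kubeconfig):

```bash
# Hypothetical: register client-keystone-auth as an exec credential plugin
# for a kubeconfig user entry.
kubectl --kubeconfig kubeconfig config set-credentials openstackuser \
  --exec-command="$(pwd)/bin/client-keystone-auth" \
  --exec-api-version=client.authentication.k8s.io/v1beta1
```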
As long as you keep the `example` folder inside the module repository, the reference `source = "../"` in `main.tf` works. For a cleaner setup, you can also extract the example folder and put it somewhere else; just make sure you change the `source` setting accordingly. You can also reference the GitHub repository itself like so:
module "my_cluster" {
source = "git::https://github.com/inovex/kubernetes-on-openstack.git?ref=v1.0.0"
# ...
}
If you do it that way, make sure to run `terraform get --update` before running any other terraform commands.
There are multiple ways to authenticate with your OpenStack provider, each with their own pros and cons. If you want to know more, check out this blog post about OpenStack credential handling for terraform. You can choose any of them, as long as you make sure `auth_url`, `username` and `password` are set explicitly as terraform variables. This is required because they are passed down to the OpenStack Cloud Controller running inside the provisioned Kubernetes cluster. In a team setup, those should be dedicated service account credentials. The easiest way to get started is to create a `terraform.tfvars` file in the `example` folder. If you start working in a team setup, you might want to check out the method using `clouds-public.yaml`, `clouds.yaml` and `secure.yaml` files described in the aforementioned blog post.
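A minimal `terraform.tfvars` sketch, assuming only the three variables named above; the values are placeholders, and any further required variables (for example a cluster name) should be taken from `variables.tf`:

```bash
# Sketch only: write a terraform.tfvars with the credentials the module and the
# OpenStack Cloud Controller need. Replace the placeholder values.
cat > terraform.tfvars <<'EOF'
auth_url = "https://keystone.example.com:5000/v3"
username = "k8s-service-account"
password = "change-me"
EOF
```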
Initialize the folder and run `plan`:
```bash
terraform init
terraform plan
```
Now you can create the cluster by running:

```bash
terraform apply
```
It takes some time for the nodes to be fully configured. After running `terraform apply`, there will be a kubeconfig file configured for the newly created cluster. The `--insecure-skip-tls-verify=true` flag in there is needed because we use the auto-generated certificates of kubeadm. There are possible workarounds to remove the flag (e.g. fetch the CA from the Kubernetes master, see below). Keep in mind: by default, all users in the (OpenStack) project will have `cluster-admin` rights. You can access the cluster via:
```bash
kubectl --kubeconfig kubeconfig get nodes
```
It is also possible to set the `KUBECONFIG` environment variable to reference the location of the `kubeconfig` file created by terraform, or to copy its contents to your `.kube` settings, but keep in mind that the kubeconfig changes often because of Floating IPs.
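If you prefer the environment variable route, something like this works; just remember to re-export after infrastructure changes, since a new Floating IP means a new kubeconfig:

```bash
# Point kubectl at the generated kubeconfig for the current shell session.
export KUBECONFIG="$(pwd)/kubeconfig"
kubectl get nodes
```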
To create a simple deployment, run:
```bash
kubectl --kubeconfig kubeconfig create deployment nginx --image=nginx
kubectl --kubeconfig kubeconfig expose deployment nginx --port=80
```
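To verify that the deployment and its service came up (the `app=nginx` label is set automatically by `kubectl create deployment`):

```bash
# Both the deployment and the service are named nginx.
kubectl --kubeconfig kubeconfig get deployment,service nginx

# The pods carry the app=nginx label set by kubectl create deployment.
kubectl --kubeconfig kubeconfig get pods -l app=nginx
```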
In the current setup, the master node can be reached via

```bash
ssh ubuntu@<ip>
```

and can also be used as a jumphost to access the worker nodes:

```bash
ssh -J ubuntu@<ip> ubuntu@node-0
```
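If you access the nodes regularly, an SSH config entry saves typing the `-J` flag. This is only a sketch: the host alias is made up, and the `node-*` pattern beyond `node-0` is an assumption about the node naming.

```bash
# Append a jump-host configuration; replace <ip> with the master's floating IP.
cat >> ~/.ssh/config <<'EOF'
Host k8s-master
    HostName <ip>
    User ubuntu

Host node-*
    User ubuntu
    ProxyJump k8s-master
EOF
```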
To avoid using `insecure-skip-tls-verify=true`, you can fetch the cluster CA:
```bash
export MASTER_IP=""
export CLUSTER_CA=$(curl -sk "https://${MASTER_IP}:6443/api/v1/namespaces/kube-public/configmaps/cluster-info" | jq -r '.data.kubeconfig' | grep -o 'certificate-authority-data:.*' | awk '{print $2}')
# ${CLUSTER_NAME} must match the name provided in the terraform.tfvars
export CLUSTER_NAME=""
kubectl --kubeconfig ./kubeconfig config set clusters.${CLUSTER_NAME}.certificate-authority-data ${CLUSTER_CA}
kubectl --kubeconfig ./kubeconfig config set clusters.${CLUSTER_NAME}.insecure-skip-tls-verify false
unset CLUSTER_CA
unset MASTER_IP
unset CLUSTER_NAME
```
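Afterwards, kubectl should verify the API server certificate against the fetched CA:

```bash
# This call now fails if the server certificate does not match the stored CA data.
kubectl --kubeconfig ./kubeconfig get nodes
```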
To create a shared Kubernetes cluster for multiple users, we can use application credentials:
```bash
openstack --os-cloud <cloud> --os-project-id=<project-id> application credential create --restricted kubernetes
```
More docs will follow when the feature is merged.
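Independent of this module, the resulting credential can already be consumed by the OpenStack tooling itself, for example through a `clouds.yaml` entry in the standard openstacksdk format. This is a sketch: the cloud name and Keystone endpoint are placeholders, and `<id>`/`<secret>` come from the output of the command above.

```bash
# Sketch: use the application credential via clouds.yaml.
# If you already have a clouds.yaml, merge the entry by hand instead.
cat > clouds.yaml <<'EOF'
clouds:
  kubernetes:
    auth_type: v3applicationcredential
    auth:
      auth_url: https://keystone.example.com:5000/v3
      application_credential_id: <id>
      application_credential_secret: <secret>
EOF
```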
If you want to use containerd in version 1.2.2, you will probably face this containerd issue when using images from quay.io.