# Xcluster ovl - Kubernetes

A Kubernetes cluster with the bridge CNI-plugin.

This overlay provides a platform with fast turn-around times and a very flexible network setup. The main purpose is development and troubleshooting of Kubernetes network functions. This is not a generic Kubernetes cluster suitable for any purpose, for instance application development; better alternatives exist for that.

## Basic Usage

Prerequisite: an environment for starting xcluster without K8s is set up.

To set up the environment, source the `Envsettings.k8s` file:

```
$ cd xcluster
$ . ./Envsettings.k8s
The image is not readable [/home/guest/xcluster/workspace/xcluster/hd-k8s.img]

Please follow the instructions at;
https://github.com/Nordix/xcluster#xcluster-with-kubernetes

Example;
armurl=http://artifactory.nordix.org/artifactory/cloud-native
curl -L $armurl/xcluster/images/hd-k8s.img.xz | xz -d > $__image
```

Pre-built images for K8s on xcluster are provided; please see the wiki. When `hd-k8s.img` has been downloaded, start (and stop) a cluster with:

```
xc mkcdrom; xc start
# test something...
xc stop
```

The "standard" cluster is started with 4 nodes and 2 "routers". Xterm windows are started as consoles for all VMs. In a node console xterm (green) test K8s things, for instance;

```
kubectl get nodes   # (take some time to see the nodes)
kubectl get pods -A
kubectl wait -A --timeout=150s --for condition=Ready --all pods
```

The xterm consoles are not necessary and may soon feel annoying. Instead, start xcluster "in background". The consoles are still available via GNU screen:

```
xc mkcdrom; xc starts
```

Note the trailing "s" in "starts". You don't have to stop xcluster before starting it again; a running xcluster will automatically be stopped before the new one is started.
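When running in background, the consoles can be reached with the standard GNU screen commands. A hedged sketch (the session name is an assumption; list the sessions first):

```
screen -ls            # list running screen sessions
screen -r <session>   # attach to a console session (name is an assumption)
# detach again with Ctrl-a d
```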

To get a terminal window to a VM, use the `vm` command (actually a bash function):

```
vm 1
```

An xterm window will pop up with a terminal to the VM, logged in as "root".

Some base images for startup and tests are "pre-pulled" and can be used off-line. A basic alpine image for general tests and an mconnect image for load-balancing tests are provided:

```
vm 1
# On vm-001;
images  # (alias to print loaded images)
kubectl create -f /etc/kubernetes/alpine.yaml
kubectl get pods
kubectl exec -it (an-alpine-pod) sh
kubectl create -f /etc/kubernetes/mconnect.yaml
kubectl get pods
kubectl get svc
mconnect -address mconnect.default.svc.xcluster:5001 -nconn 100
```

## Single-stack

K8s is started in dual-stack mode by default. To start K8s in single-stack mode, do:

```
SETUP=ipv4 xc mkcdrom kubernetes; xc starts
# or;
SETUP=ipv6 xc mkcdrom kubernetes; xc starts
```

## Service Account

To access the API from within a pod, a Service Account must be used. This is not easy, but reading about others' problems helps.
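As a hedged illustration (not part of this overlay; the account name is made up), a Service Account with read access can be created and checked with plain kubectl:

```
# Hypothetical ServiceAccount "api-reader", bound to the built-in "view" role
kubectl create serviceaccount api-reader
kubectl create rolebinding api-reader-view \
  --clusterrole=view --serviceaccount=default:api-reader
# Verify what the account may do
kubectl auth can-i list pods --as=system:serviceaccount:default:api-reader
```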

Some keys and certificates must be generated. A good instruction can be found here. Certificates are stored in git but can be re-generated with:

```
./kubernetes.sh ca
```

## Problems

In no particular order.

### Security, or lack thereof

It is actually hard to configure Kubernetes without security.

kubernetes/client-go#314

Access to the API from within a pod uses the secure port. An API token and some x509 material are needed; 1287
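As a hedged sketch using the standard Kubernetes in-pod paths (not specific to this overlay), a pod can reach the secure API port with the mounted ServiceAccount token and CA certificate:

```
# Standard in-pod ServiceAccount paths; adjust the namespace/resource as needed
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/pods
```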

### The random problem

This showed up as a real problem in linux-4.17.

The kube-apiserver uses /dev/random, which blocks until enough "entropy" has been collected; this takes ~3-4 minutes on an xcluster VM.

There is a virtual device (virtio-rng) that makes the host's random source available in the VMs.

Enable the kernel config:

```
Character devices >
  Hardware Random Number Generator Core support >
    VirtIO Random Number Generator support
```

Then configure it in the kvm startup. We can't use the host's /dev/random since it drains too fast and blocks, but we can use /dev/urandom:

__kvm_opt+=" -object rng-random,filename=/dev/urandom,id=rng0"
__kvm_opt+=" -device virtio-rng-pci,rng=rng0,max-bytes=1024,period=80000"
export __kvm_opt
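To check inside a VM that the virtio RNG is in use, the standard kernel interfaces can be consulted (a hedged check; the exact backend name is an assumption):

```
# On a VM: current hwrng backend and available entropy
cat /sys/class/misc/hw_random/rng_current   # typically a virtio_rng device (assumption)
cat /proc/sys/kernel/random/entropy_avail
```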

### HugeTLB

From K8s v1.18.x, cgroup support for huge pages (HugeTLB) is required.

```
> grep HUGE config/linux-5.4.2
CONFIG_CGROUP_HUGETLB=y
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y
CONFIG_HAVE_ARCH_HUGE_VMAP=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
# CONFIG_TRANSPARENT_HUGEPAGE is not set
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
```
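On a running VM, the availability of the hugetlb cgroup controller can be verified with the standard /proc interface (a hedged check):

```
# The hugetlb controller should be listed and enabled
grep hugetlb /proc/cgroups
```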

### Seccomp

From K8s v1.19.x, seccomp must be enabled in the kernel.
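The relevant kernel options look roughly like this (a hedged sketch, assuming the same config file as in the HugeTLB example; the exact symbols depend on the architecture):

```
> grep SECCOMP config/linux-5.4.2
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP=y
CONFIG_SECCOMP_FILTER=y
```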