In a standard Docker-based Kubernetes cluster, the kubelet runs on each node as a systemd service and takes care of the communication between the runtime and the API service.
It is responsible for starting microservice pods (such as `kube-proxy`, `kubedns`, etc. - these can differ between ways of deploying Kubernetes) and user pods.
The kubelet's configuration determines which runtime is used and in what way.
The kubelet itself is executed in a Docker container (as we can see in `kubelet.service`), but, importantly, it is not a Kubernetes pod (at least for now), so we can keep the kubelet running inside a container (or directly on the host) and, regardless of this, run pods in the chosen runtime.
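You can see this on a node (a sketch; the exact container name depends on how your cluster was deployed):

```
# systemctl status kubelet
# docker ps | grep kubelet
```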
Below you will find instructions for switching one or more nodes of a running Kubernetes cluster from Docker to CRI-O.

You must prepare and install `crio` on each node you would like to switch. Here's the list of files that must be provided:
File path | Description | Location |
---|---|---|
`/etc/crio/crio.conf` | crio configuration | Generated on cri-o `make install` |
`/etc/crio/seccomp.conf` | seccomp config | Example stored in the cri-o repository |
`/etc/containers/policy.json` | containers policy | Example stored in the cri-o repository |
`/bin/{crio, runc}` | crio and runc binaries | Built from the cri-o repository |
`/usr/local/libexec/crio/conmon` | conmon binary | Built from the cri-o repository |
`/opt/cni/bin/{flannel, bridge,...}` | CNI plugin binaries | Can be built from the containernetworking/cni sources |
`/etc/cni/net.d/10-mynet.conf` | Network config | Example stored in the README file |
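If you need to build these yourself, here's a rough sketch (the exact make targets may differ between cri-o versions, so check each repository's README):

```
# git clone https://github.com/kubernetes-incubator/cri-o && cd cri-o
# make && make install
# git clone https://github.com/containernetworking/cni && cd cni
# ./build.sh && cp bin/* /opt/cni/bin/
```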
The `crio` binary can be executed directly on the host, inside a container, or in any other way.
However, the recommended way is to set it up as a systemd service.
Here's an example unit file:
```
# cat /etc/systemd/system/crio.service
[Unit]
Description=CRI-O daemon
Documentation=https://github.com/kubernetes-incubator/cri-o

[Service]
ExecStart=/bin/crio --runtime /bin/runc --log /root/crio.log --log-level debug
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
```
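After placing the unit file, reload systemd and enable the service so it starts on boot:

```
# systemctl daemon-reload
# systemctl enable crio
```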
First, stop the kubelet service running on the node:

```
# systemctl stop kubelet
```

and stop all kubelet-managed Docker containers that are still running:

```
# docker stop $(docker ps | grep k8s_ | awk '{print $1}')
```
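To be sure nothing was left behind, you can check that the list is now empty (the command should print nothing):

```
# docker ps | grep k8s_
```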
We have to make sure that `kubelet.service` starts after `crio.service`.
This can be done by adding `crio.service` to the `Wants=` section in `/etc/systemd/system/kubelet.service`:

```
# cat /etc/systemd/system/kubelet.service | grep Wants
Wants=docker.socket crio.service
```
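If you prefer not to edit the unit file directly, the same dependency can be expressed as a systemd drop-in (a sketch; the file name is arbitrary):

```
# cat /etc/systemd/system/kubelet.service.d/10-crio.conf
[Unit]
Wants=crio.service
```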
If you'd like to change the way the kubelet is started (e.g. directly on the host instead of in a Docker container), you can change it here, but, as mentioned, it's not necessary.

Kubelet parameters are stored in the `/etc/kubernetes/kubelet.env` file:
```
# cat /etc/kubernetes/kubelet.env | grep KUBELET_ARGS
KUBELET_ARGS="--pod-manifest-path=/etc/kubernetes/manifests
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0
--cluster_dns=10.233.0.3 --cluster_domain=cluster.local
--resolv-conf=/etc/resolv.conf --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml
--require-kubeconfig"
```
You need to add the following parameters to `KUBELET_ARGS` (a sketch of the resulting variable follows the list):

- `--experimental-cri=true` - use the Container Runtime Interface. This will be true by default from the Kubernetes 1.6 release.
- `--container-runtime=remote` - use a remote runtime with the provided socket.
- `--container-runtime-endpoint=/var/run/crio.sock` - socket for the remote runtime (the default `crio` socket location).
- `--runtime-request-timeout=10m` - optional but useful. Some requests, especially pulling huge images, may take longer than the default (2 minutes) and would cause an error.
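Based on the example above, the resulting variable might look like this (a sketch; keep whatever flags your deployment already sets):

```
KUBELET_ARGS="--pod-manifest-path=/etc/kubernetes/manifests
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0
--cluster_dns=10.233.0.3 --cluster_domain=cluster.local
--resolv-conf=/etc/resolv.conf --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml
--require-kubeconfig --experimental-cri=true --container-runtime=remote
--container-runtime-endpoint=/var/run/crio.sock --runtime-request-timeout=10m"
```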
The kubelet is now prepared.

If your cluster is using the flannel network, your network configuration should look like:

```
# cat /etc/cni/net.d/10-mynet.conf
{
    "name": "mynet",
    "type": "flannel"
}
```
The flannel CNI plugin will then take its parameters from `/run/flannel/subnet.env` - a file generated by the flannel kubelet microservice.
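For reference, that file typically contains variables like these (the addresses are only an illustration; the real values come from your flannel configuration):

```
# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.233.64.0/18
FLANNEL_SUBNET=10.233.65.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```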
Start crio first, then the kubelet. If you created the `crio` service:

```
# systemctl start crio
# systemctl start kubelet
```
You can follow the progress of the node's preparation using `kubectl get nodes` or `kubectl get pods --all-namespaces` on the Kubernetes master.
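Once the node reports Ready, you can confirm that the runtime actually changed (a sketch, assuming the node is named node1; the reported version string depends on your build):

```
# kubectl describe node node1 | grep "Container Runtime Version"
```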