
Commit 9e8d560

Blog post about Dapr on Raspberry Pi cluster.
1 parent 7fea2e8 commit 9e8d560

File tree

3 files changed: +258 -0 lines changed


.gitignore

Lines changed: 1 addition & 0 deletions

@@ -4,3 +4,4 @@
 node_modules/
 daprblog/public
 daprblog/resources/_gen
+**/.DS_Store

Lines changed: 257 additions & 0 deletions

@@ -0,0 +1,257 @@
---
date: "2020-10-30T22:20:00-07:00"
title: "Dapr on Raspberry Pi with K3s"
linkTitle: "Dapr on Raspberry Pi with K3s"
author: "[Artur Souza](https://github.com/artursouza)"
type: blog
---

Since its announcement over a year ago, Dapr has shown how it can expedite the development of cloud native applications via a standard API for common building blocks such as pub/sub, bindings, and method invocation. It is well known that Dapr can be deployed to Kubernetes running via [Minikube](https://docs.dapr.io/operations/hosting/kubernetes/cluster/setup-minikube/), [kind](https://github.com/dapr/dapr/pull/2144), or cloud providers like Azure's [AKS](https://docs.dapr.io/operations/hosting/kubernetes/cluster/setup-aks/). It has also been demonstrated how [Dapr can run on a Kubernetes cluster of Raspberry Pis](https://youtu.be/LAUDVk8PaCY?t=1251).

This post explains how to deploy Dapr on [Rancher's K3s Kubernetes](https://rancher.com/docs/k3s/latest/en/) on a cluster of Raspberry Pis, showcasing an example deployment for edge computing.

## Why?

I have used Dapr via Minikube, kind, and AKS, but I wanted to learn how to set up an on-prem Kubernetes cluster "from scratch" and use Dapr to validate it. In this case, "from scratch" meant purchasing and provisioning the hardware, in addition to installing Kubernetes. The setup also had to have more than one node, to uncover the challenges of provisioning multiple computers. For this exercise, I used 4 computers (or nodes).

## Planning

Purchasing the hardware was a no-brainer: Raspberry Pi is the go-to option for a cheap DIY project. I picked the Raspberry Pi 4 with 4GB of RAM since it gives a good memory-to-CPU ratio: one 1.5GHz core per GB of RAM. The 2GB version would not give enough memory per node (IMO). Apart from the higher price tag (x4 nodes), the 8GB version seems overkill for this, as I guessed that under heavy load (too many pods) the nodes would run hot on CPU utilization before using all the memory. This is a guesstimate, so you can still decide to go with the 2GB or 8GB version.

The details of purchasing the Raspberry Pi computers and accessories are outside the scope of this post; there are many blog posts detailing the hardware purchase experience. On the other hand, a few items proved valuable for the hardware setup and are worth mentioning:

* Raspberry Pi can run hot even when idle, so it is recommended to install a cooling fan and heatsink.
* For Raspberry Pi 4 model B, make sure to have a USB-C power supply with an output of 5V and 3A.
* In case of multiple Raspberry Pis:
  - Make sure the surge protector can take all the power supplies. A power strip might not work since the power supplies might not all fit next to each other. A cube-shaped one might be more practical.
  - Check if there are enough Ethernet ports in your router for all the computers. An Ethernet switch might be needed.

For the operating system, I picked Ubuntu Server 20.04.1 LTS because it is a well-known distribution with a 64-bit server (not desktop) version. At the time of this writing, Raspberry Pi OS was only available in 32-bit. Although no single process will address more than 4GB of RAM, I still wanted to validate ARM64 images on Dapr. [Flashing the OS image](https://www.raspberrypi.org/documentation/installation/installing-images/) and setting up the static IP, hostname, and SSH was a manual step. It can be automated to some extent, but each node needs a unique configuration and each SD card needs to be individually flashed and inserted into the computer.
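
For reference, the per-node network configuration on Ubuntu Server (which uses netplan) can look roughly like the sketch below. The file name, interface name `eth0`, addresses, and gateway are placeholders - adjust them to your own network.

```sh
# Hypothetical netplan config for one node (run on the node itself).
# Interface name, addresses, and gateway are examples only.
cat <<EOF | sudo tee /etc/netplan/99-static-ip.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.1.101/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
EOF
sudo netplan apply

# Give each node a unique hostname, e.g. raspi-001 through raspi-004.
sudo hostnamectl set-hostname raspi-001
```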

The next decision was which Kubernetes installation to go with. I was aware of [Rancher's K3s Kubernetes](https://rancher.com/docs/k3s/latest/en/) but also read about [Ubuntu's MicroK8s](https://microk8s.io), and there is a [blog post on how to build a Raspberry Pi cluster with MicroK8s](https://ubuntu.com/blog/building-a-raspberry-pi-cluster-with-microk8s). Eventually, I found Rancher's [Ansible Playbook for K3s](https://github.com/rancher/k3s-ansible), which installs K3s on all nodes with minimal manual configuration. Although manually setting up 4 nodes is not the end of the world, I wanted to make the setup as automated as possible.

For those new to [Ansible](https://www.ansible.com) (like me), it automates provisioning and configuration, enabling infrastructure as code. I like Ansible for this job because it only requires the nodes to be accessible via SSH with private key authentication - no need to install an agent or anything else on the nodes. The only downside is that it does not run on Windows, so you need a Linux or MacOS host to kick off the Ansible Playbook.

{{< imgproc picluster Resize "600x" >}}
Raspberry Pi cluster of 4
{{< /imgproc >}}

## Pre-requisites

### Raspberry Pi or ARM64 emulator

As mentioned before, you can use one or more [Raspberry Pi](https://www.raspberrypi.org/products/raspberry-pi-4-model-b/?resellerType=home&variant=raspberry-pi-4-model-b-4gb) computers. Alternatively, you can use [QEMU](https://www.qemu.org/) to emulate a Raspberry Pi computer. The computers (or virtual machines) must:

* Have GNU/Linux installed. Instructions here assume [Ubuntu Server 20.04.1 LTS](https://ubuntu.com/download/raspberry-pi).
* Be accessible via [ssh](https://www.raspberrypi.org/documentation/remote-access/ssh/README.md) and [authenticated via authorized key](https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys-on-ubuntu-20-04) (see the sketch after this list).
* Have a static IP (or static DHCP lease) configured. Instructions here assume 192.168.1.101 for the master and 192.168.1.[102-104] for the worker nodes.
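
If key-based SSH access is not set up yet, it can be done roughly like this. The key path and the `ubuntu` username (from the Ubuntu Server image) are assumptions - adjust as needed:

```sh
# Generate a key pair on your desktop (skip if you already have one).
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

# Copy the public key to each node so Ansible can log in without a password.
ssh-copy-id -i ~/.ssh/id_ed25519.pub ubuntu@192.168.1.101
ssh-copy-id -i ~/.ssh/id_ed25519.pub ubuntu@192.168.1.102
ssh-copy-id -i ~/.ssh/id_ed25519.pub ubuntu@192.168.1.103
ssh-copy-id -i ~/.ssh/id_ed25519.pub ubuntu@192.168.1.104
```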

#### External References

The following links can serve as a starting point to learn how to build your physical or virtual cluster. These are examples only and are not necessarily endorsed by me.

* [Build a Raspberry Pi cluster computer](https://magpi.raspberrypi.org/articles/build-a-raspberry-pi-cluster-computer) by The MagPi Magazine
* [Raspberry Pi Cluster Emulation With Docker Compose](https://appfleet.com/blog/raspberry-pi-cluster-emulation-with-docker-compose/) by appFleet

### GNU/Linux or MacOS computer

Because the setup requires Ansible, Windows is not supported - only GNU/Linux or MacOS. Windows users can still follow the instructions via a virtual machine running GNU/Linux.

Install the following on your desktop:

* [Git](https://git-scm.com/downloads)
* [Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* [Dapr CLI](https://docs.dapr.io/getting-started/install-dapr/#install-the-dapr-cli)
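
For reference, on a Debian/Ubuntu desktop the installation might look roughly like this. The exact commands depend on your OS and distribution, so prefer the linked docs:

```sh
# Rough example for a Debian/Ubuntu desktop; follow the linked docs for your OS.
sudo apt-get update && sudo apt-get install -y git
pip3 install --user ansible

# kubectl (downloads the latest stable release for amd64).
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl

# Dapr CLI install script from the Dapr docs.
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
```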

## Step 1: Configure the Ansible Playbook

Clone the Ansible Playbook:

```sh
git clone git@github.com:rancher/k3s-ansible.git
cd k3s-ansible
```

Optionally, reset to the same revision used to create these instructions:
```sh
git checkout 721c3487027e42d30c60eb206e0fb5abfddd094f
```

Then, configure your cluster. Your IP addresses might be different from the configuration below:

```sh
cp -R inventory/sample inventory/my-cluster
cat << EOF | tee inventory/my-cluster/hosts.ini
[master]
192.168.1.101

[node]
192.168.1.[102:104]

[k3s_cluster:children]
master
node
EOF
```

Update the username used in the cluster. The default is `debian`; since we are using Ubuntu, update it to `ubuntu`:

On GNU/Linux:
```sh
sed -i 's/debian/ubuntu/' inventory/my-cluster/group_vars/all.yml
```

On MacOS:
```sh
sed -i .bak 's/debian/ubuntu/' inventory/my-cluster/group_vars/all.yml
```
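
Optionally, before running the playbook, you can confirm that Ansible can reach every node over SSH:

```sh
# Ping all hosts in the inventory over SSH (should report "pong" for each node).
ansible all -i inventory/my-cluster/hosts.ini -m ping
```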

## Step 2: Install K3s via Ansible Playbook

Run the Ansible Playbook to install K3s on your cluster:
```sh
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
```

Once completed, you should see an output that ends like this:
```txt
===============================================================================
k3s/master : Enable and check K3s service ------------------------------------------------------------------------------------------------------------------------------- 24.58s
Gathering Facts --------------------------------------------------------------------------------------------------------------------------------------------------------- 10.48s
k3s/node : Enable and check K3s service ---------------------------------------------------------------------------------------------------------------------------------- 8.49s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------- 8.07s
Gathering Facts ---------------------------------------------------------------------------------------------------------------------------------------------------------- 7.13s
download : Download k3s binary arm64 ------------------------------------------------------------------------------------------------------------------------------------- 6.76s
k3s/master : Change file access node-token ------------------------------------------------------------------------------------------------------------------------------- 3.53s
k3s/master : Copy K3s service file --------------------------------------------------------------------------------------------------------------------------------------- 2.03s
raspberrypi : Test for raspberry pi /proc/device-tree/model -------------------------------------------------------------------------------------------------------------- 1.91s
k3s/node : Copy K3s service file ----------------------------------------------------------------------------------------------------------------------------------------- 1.86s
k3s/master : Replace https://localhost:6443 by https://master-ip:6443 ---------------------------------------------------------------------------------------------------- 1.85s
k3s/master : Copy config file to user home directory --------------------------------------------------------------------------------------------------------------------- 1.35s
k3s/master : Read node-token from master --------------------------------------------------------------------------------------------------------------------------------- 1.17s
k3s/master : Wait for node-token ----------------------------------------------------------------------------------------------------------------------------------------- 1.13s
k3s/master : Create crictl symlink --------------------------------------------------------------------------------------------------------------------------------------- 1.10s
k3s/master : Restore node-token file access ------------------------------------------------------------------------------------------------------------------------------ 1.07s
k3s/master : Create directory .kube -------------------------------------------------------------------------------------------------------------------------------------- 1.01s
k3s/master : Create kubectl symlink -------------------------------------------------------------------------------------------------------------------------------------- 1.01s
raspberrypi : Test for raspberry pi /proc/cpuinfo ------------------------------------------------------------------------------------------------------------------------ 0.96s
prereq : Enable IPv6 forwarding ------------------------------------------------------------------------------------------------------------------------------------------ 0.92s
```

Copy the cluster config from the master node (your cluster's master IP address might differ):
```sh
scp ubuntu@192.168.1.101:~/.kube/config ~/.kube/piconfig
```

Export the environment variable below to use the new cluster:
```sh
export KUBECONFIG=~/.kube/piconfig
```

Optionally, you can merge `~/.kube/piconfig` into `~/.kube/config`:
```sh
KUBECONFIG=~/.kube/config:~/.kube/piconfig kubectl config view --flatten | tee ~/.kube/config && kubectl config use-context default
```
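
If you do merge the files, you can confirm which context is active:

```sh
# List the available contexts; the current one is marked with '*'.
kubectl config get-contexts
```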

Finally, list all the nodes in the new cluster:
```sh
kubectl get nodes
```

Output should be similar to this (depending on your cluster setup):
```txt
NAME        STATUS   ROLES    AGE   VERSION
raspi-003   Ready    <none>   11m   v1.17.5+k3s1
raspi-002   Ready    <none>   11m   v1.17.5+k3s1
raspi-004   Ready    <none>   11m   v1.17.5+k3s1
raspi-001   Ready    master   11m   v1.17.5+k3s1
```
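
Since one of my goals was to validate ARM64 images, it is also reassuring to confirm the architecture reported by each node:

```sh
# Show the architecture label for each node (should read arm64 on Raspberry Pi 4).
kubectl get nodes -L kubernetes.io/arch
```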

## Step 3: Optionally, install Kubernetes Dashboard

Deploy Kubernetes Dashboard:
```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
```

Create credentials for Kubernetes Dashboard:
```sh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
```

Copy the token displayed by the command below into your clipboard:
```sh
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
```

In a new terminal, start the Kubernetes proxy:
```sh
KUBECONFIG=~/.kube/piconfig kubectl proxy
# Note: the KUBECONFIG environment variable is probably not set up in a new terminal window.
```

Now, open the following URL in your browser: [http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/](http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/)

On the first screen of the Kubernetes Dashboard, paste the token you copied earlier.

## Step 4: Install Dapr and Apps

Deploy Dapr on your cluster:
```sh
dapr init -k
```

After a few minutes, check the status (dapr-operator might take longer than the other services to become healthy):
```sh
dapr status -k
```

```txt
NAME                   NAMESPACE    HEALTHY  STATUS   REPLICAS  VERSION  AGE  CREATED
dapr-placement         dapr-system  True     Running  1         0.11.3   1m   2020-10-23 15:56.15
dapr-sidecar-injector  dapr-system  True     Running  1         0.11.3   1m   2020-10-23 15:56.15
dapr-dashboard         dapr-system  True     Running  1         0.3.0    1m   2020-10-23 15:56.15
dapr-operator          dapr-system  True     Running  1         0.11.3   1m   2020-10-23 15:56.15
dapr-sentry            dapr-system  True     Running  1         0.11.3   1m   2020-10-23 15:56.15
```
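
Optionally, you can also list the Dapr control plane pods to see how they were scheduled across the Raspberry Pi nodes:

```sh
# Show the Dapr system pods and the node each one landed on.
kubectl get pods -n dapr-system -o wide
```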

In a separate terminal, open the Dapr dashboard so you can check the status of the apps running on Dapr:
```sh
KUBECONFIG=~/.kube/piconfig dapr dashboard -k
# Note: the KUBECONFIG environment variable is probably not set up in a new terminal window.
```

Back in your first terminal window, follow [these instructions](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes) to deploy an app using Dapr on Kubernetes. Don't forget to check back on the Dapr Dashboard in your browser window.
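
Once the quickstart app is deployed, a quick way to confirm the Dapr sidecar was injected is the container count per pod - `2/2` means your app container plus the `daprd` sidecar:

```sh
# Pods with an injected Dapr sidecar report 2/2 ready containers (app + daprd).
kubectl get pods
```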
246+
247+
## Step 5: Clean Up
248+
249+
In case you don't want to keep this setup in your cluster (or want to redo it), uninstall K3s with the following Ansible Playbook:
250+
251+
```sh
252+
ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini
253+
```
254+
255+
## Thank You
256+
257+
Thanks a lot for trying Dapr on Raspberry Pi with K3s.