### Why Peer-to-Peer?

Kairos has chosen Peer-to-Peer as an internal component to enable automatic coordination of Kairos nodes. To understand why [EdgeVPN](https://github.com/mudler/edgevpn) has been selected, see the table below, which compares EdgeVPN with other popular VPN solutions:

|                          | Wireguard     | OpenVPN   | EdgeVPN                                                |
|--------------------------|---------------|-----------|--------------------------------------------------------|
| Memory space             | Kernel module | Userspace | Userspace                                              |
| Protocol                 | UDP           | UDP, TCP  | TCP, UDP, UDP/QUIC, ws, everything supported by libp2p |
| P2P                      | Yes           | Yes       | Yes                                                    |
| Fully meshed             | No            | No        | Yes                                                    |
| Management server (SPOF) | Yes           | Yes       | No                                                     |
| Self-coordinated         | No            | No        | Yes                                                    |

Key factors, such as self-coordination and the ability to share metadata between nodes, have led to the selection of EdgeVPN. However, there are tradeoffs and considerations to note in the current architecture:

- Routing all traffic through a VPN can introduce additional latency
- Gossip protocols can be chatty, especially when using DHT to create VPNs that span regions
- EdgeVPN runs in userspace, which can be slower than kernel-space solutions such as Wireguard
- In highly trafficked environments, CPU usage increases due to the additional encryption layers introduced by EdgeVPN

Nonetheless, these tradeoffs can be overcome, and new features can be added thanks to EdgeVPN's design.
For example:

- There is no need for any server to handle traffic (no SPOF), and no additional configuration is necessary
- The p2p layer is decentralized and can span different networks by using DHT and a bootstrap server
- Self-coordination simplifies the provisioning experience
- Internal cluster traffic can also be offloaded to other mechanisms if network performance is a prerequisite, for instance with [KubeVIP](/docs/examples/multi-node-p2p-ha-kubevip)
- New nodes can join the network and become cluster members even after the cluster provisioning phase, making EdgeVPN a scalable solution

### Why a VPN?

A VPN allows for the configuration of a Kubernetes cluster without depending on the underlying network configuration. This design model is popular in certain edge use cases where fixed IPs are not a viable solution. We can summarize the implications as follows:

|                       | K8s without VPN                                                              | K8s with VPN                                                             |
|-----------------------|------------------------------------------------------------------------------|--------------------------------------------------------------------------|
| IP management         | Needs a static IP assigned by DHCP or configured manually (can be automated) | Automatically coordinated VirtualIPs for nodes, or manually assigned     |
| Network configuration | `etcd` needs to be configured with fixed IPs assigned by your network        | Automatically assigned, fixed VirtualIPs for `etcd`                      |
| Networking            | Cluster IPs and networking are handled natively by CNIs (no extra layers)    | Kubernetes network services have Cluster IPs sitting below the VPN       |
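As a concrete illustration, opting a node into the p2p network is a matter of supplying a shared network token in the Kairos cloud config. The snippet below is a minimal sketch based on the Kairos p2p configuration; the token value is, of course, a placeholder:

```yaml
#cloud-config

# Minimal sketch: joining the p2p mesh only requires a shared network token,
# generated once (e.g. with `docker run -ti --rm quay.io/mudler/edgevpn -b -g`)
# and given to every node that should join the same network.
p2p:
  network_token: "___GENERATED TOKEN HERE___"
```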
While actively participating in the network, each node keeps the shared ledger up to date with information about itself and how it can be reached, advertising its own IP and libp2p identity. This allows nodes to discover each other and to route packets.

Assume we want to establish an SSH connection from Node A to Node B, which exposes the `sshd` service, through the VPN network. The process is as follows:

1. Node A (`10.1.0.1`) uses `ssh` to dial the VirtualIP of Node B (`10.1.0.2`) in the network.
2. EdgeVPN reads the frame from the TUN interface.
3. If EdgeVPN finds a match in the ledger between the VirtualIP and an associated identity, it opens a p2p stream to Node B using the libp2p identity.
4. Node B receives the incoming p2p stream from EdgeVPN.
5. Node B performs a lookup in the shared ledger.
6. If a match is found, Node B routes the packet back to the TUN interface, up to the application level.

### Controller

A set of Kubernetes Native Extensions ([Entangle](/docs/reference/entangle)) brings peer-to-peer functionality to existing clusters as well, allowing them to bridge connections with the same design architecture described above.

It can be used to:

- Bridge services between clusters
- Bridge external connections to a cluster
- Set up EdgeVPN as a DaemonSet between cluster nodes

See also the Entangle [documentation](/docs/reference/entangle) to learn more about it.

## Benefits
The use of p2p technology to enable self-coordination of Kubernetes clusters in Kairos offers a number of benefits:

1. **Simplified deployment**: Deploying Kubernetes clusters at the edge is greatly simplified. Users don’t need to specify any network settings or use a control management interface to set up and manage their clusters.
1. **Easy customization**: Kairos offers a highly customizable approach to deploying Kubernetes clusters at the edge. Users can choose from a range of meta distributions, including openSUSE, Ubuntu, Alpine and [many others](/docs/reference/image_matrix), and customize the configuration of their clusters as needed.
1. **Automatic coordination**: With Kairos, the coordination of Kubernetes clusters is completely automated. The p2p network is used as a coordination mechanism for the nodes, allowing them to communicate and coordinate with each other without the need for any external management interface. This means that users can set up and manage their Kubernetes clusters at the edge with minimal effort, freeing up their time to focus on other tasks.
1. **Secure and replicated**: The use of rendezvous points and a shared ledger, encrypted with AES and rotated via OTP, ensures that the p2p network is secure and resilient. This is especially important when deploying Kubernetes clusters at the edge, where network conditions can be unpredictable.
1. **Resilient**: Kairos ensures that the cluster remains resilient, even in the face of network disruptions or failures. By using VirtualIPs, nodes can communicate with each other without the need for static IPs, and the cluster's etcd database remains unaffected by any disruptions.
1. **Scalable**: Kairos is designed to be highly scalable. With the use of p2p technology, users can easily add or remove nodes from the cluster, without the need for any external management interface.
By leveraging p2p technology, Kairos makes it easy for users to deploy and manage their clusters without the need for complex network configurations or external management interfaces. The cluster remains secure, resilient, and scalable, ensuring that it can handle the challenges of deploying Kubernetes at the edge.

## Conclusions

In conclusion, Kairos offers an innovative approach to deploying and managing Kubernetes clusters at the edge. By leveraging peer-to-peer technology, Kairos eliminates the need for a control management interface and enables self-coordination of clusters. This makes it easier to deploy and manage Kubernetes clusters at the edge, saving users time and effort.

The use of libp2p, a shared ledger, and OTP for bootstrapping and coordination, thanks to [EdgeVPN](https://github.com/mudler/edgevpn), makes the solution secure and resilient. Additionally, the use of VirtualIPs and the option to establish a TUN interface ensure that the solution is flexible and can be adapted to a variety of network configurations without requiring exotic setups.

With Kairos, users can boost large-scale Kubernetes adoption at the edge, achieve zero-touch configuration, and have their cluster's lifecycle completely managed, all while enjoying the benefits of self-coordination and zero network configuration. This allows users to focus on running and scaling their applications, rather than worrying about the complexities of managing their Kubernetes clusters.
---
title: "Debugging station"
linkTitle: "Debugging station"
weight: 4
date: 2023-03-15
description: >
  Debugging station
---

When developing or troubleshooting Kairos, it can be useful to share a local cluster with another peer. This section illustrates how to use [Entangle](/docs/reference/entangle) to achieve that. We call this setup the _debugging station_.

## Configuration

{{% alert title="Note" color="warning" %}}

This section describes the configuration step by step. If you are in a hurry, you can skip this section and go directly to **Deploy with AuroraBoot**.

{{% /alert %}}

When deploying a new cluster, we can use [Bundles](/docs/advanced/bundles) to install the `entangle` and `cert-manager` charts automatically. We specify the bundles in the cloud config file as shown below:

```yaml
bundles:
- targets:
  - run://quay.io/kairos/community-bundles:cert-manager_latest
  - run://quay.io/kairos/community-bundles:kairos_latest
```

We also need to enable Entangle by setting `kairos.entangle.enable: true`.

Next, we generate a new token that we will use to connect to the cluster later.
```bash
docker run -ti --rm quay.io/mudler/edgevpn -b -g
```

In order for Entangle to use the token, we can define an `Entanglement` that exposes SSH over the mesh network, like the following:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ssh-entanglement
  namespace: kube-system
type: Opaque
stringData:
  network_token: ___GENERATED TOKEN HERE___
---
apiVersion: entangle.kairos.io/v1alpha1
kind: Entanglement
metadata:
  name: ssh-entanglement
  namespace: kube-system
spec:
  serviceUUID: "ssh"
  secretRef: "ssh-entanglement"
  host: "127.0.0.1"
  port: "22"
  hostNetwork: true
```

{{% alert title="Note" color="warning" %}}

If you already have a Kubernetes cluster, you can install the [Entangle](/docs/reference/entangle) chart and simply apply the manifest.

{{% /alert %}}

This entanglement exposes port `22` of the node over the mesh network with the `ssh` service UUID, so we can later connect to it. Replace `___GENERATED TOKEN HERE___` with the token you previously generated with the `docker` command (check out the [documentation](/docs/reference/entangle) for advanced usage).
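For reference, the same token can later be used from another peer (for example, your workstation) to reach the exposed service. The sketch below assumes EdgeVPN's `service-connect` subcommand; the exact flags are an assumption, so double-check them against the EdgeVPN documentation:

```bash
#!/bin/sh
# Store the token generated earlier; the value here is a placeholder.
EDGEVPNTOKEN="___GENERATED TOKEN HERE___"

# Bind the remote "ssh" service UUID to a local port, then ssh through it.
# (Commands are commented out since they need the real token and network
# access; uncomment to run. Flags are assumptions -- verify with
# `edgevpn --help`.)
#
# docker run -e "EDGEVPNTOKEN=$EDGEVPNTOKEN" --net host --rm \
#   quay.io/mudler/edgevpn service-connect ssh 127.0.0.1:2222
# ssh kairos@127.0.0.1 -p 2222
```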
In order to deploy the `Entanglement` automatically, we can add it to the `k3s` manifests folder in the cloud config file:

```yaml
write_files:
- path: /var/lib/rancher/k3s/server/manifests/expose-ssh.yaml
  permissions: "0644"
  owner: "root"
  content: |
    apiVersion: v1
    kind: Secret
    metadata:
      name: ssh-entanglement
      namespace: kube-system
    type: Opaque
    stringData:
      network_token: ___GENERATED TOKEN HERE___
    ---
    apiVersion: entangle.kairos.io/v1alpha1
    kind: Entanglement
    metadata:
      name: ssh-entanglement
      namespace: kube-system
    spec:
      serviceUUID: "ssh"
      secretRef: "ssh-entanglement"
      host: "127.0.0.1"
      port: "22"
      hostNetwork: true
```

Here's an example of a complete cloud configuration file, which automatically installs a Kairos node on the largest disk and exposes SSH with Entangle:

```yaml
#cloud-config

install:
  device: "auto"
  auto: true
  reboot: true

hostname: debugging-station-{{ trunc 4 .MachineID }}

users:
- name: kairos
  passwd: kairos
  ssh_authorized_keys:
  - github:mudler

k3s:
  enabled: true

# Specify the bundles to use
bundles:
- targets:
  - run://quay.io/kairos/community-bundles:system-upgrade-controller_latest
  - run://quay.io/kairos/community-bundles:cert-manager_latest
  - run://quay.io/kairos/community-bundles:kairos_latest

kairos:
  entangle:
    enable: true

write_files:
- path: /var/lib/rancher/k3s/server/manifests/expose-ssh.yaml
  permissions: "0644"
  owner: "root"
  content: |
    apiVersion: v1
    kind: Secret
    metadata:
      name: ssh-entanglement
      namespace: kube-system
    type: Opaque
    stringData:
      network_token: ___GENERATED TOKEN HERE___
    ---
    apiVersion: entangle.kairos.io/v1alpha1
    kind: Entanglement
    metadata:
      name: ssh-entanglement
      namespace: kube-system
    spec:
      serviceUUID: "ssh"
      secretRef: "ssh-entanglement"
      host: "127.0.0.1"
      port: "22"
      hostNetwork: true
```

In this file, you can specify various settings for your debugging station.
For example, the `hostname` field sets the name of the machine, and the `users` field creates a new user named "kairos" with a predefined password and SSH key. The `k3s` field enables the installation of the k3s Kubernetes distribution.

## Deploy with AuroraBoot

To automatically boot and install the debugging station, we can use [AuroraBoot](/docs/reference/auroraboot) with the cloud config shown above.
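As a sketch of what that looks like, AuroraBoot can read the cloud config from a file and serve a given Kairos image over the network for installation. The image tag and flags below are illustrative assumptions; check the AuroraBoot reference for the exact options:

```bash
#!/bin/sh
# Save a shortened variant of the cloud config above to a file.
cat > config.yaml <<'EOF'
#cloud-config

install:
  device: "auto"
  auto: true
  reboot: true

hostname: debugging-station-{{ trunc 4 .MachineID }}

k3s:
  enabled: true

kairos:
  entangle:
    enable: true
EOF

# Point AuroraBoot at the config and a Kairos image to serve for netboot.
# (Commented out: it needs Docker and network access. The image tag and the
# --set/--cloud-config flags are assumptions -- verify against the docs.)
#
# docker run --rm -ti --net host -v "$PWD"/config.yaml:/config.yaml \
#   quay.io/kairos/auroraboot \
#   --set "container_image=quay.io/kairos/kairos-opensuse-leap:v1.6.1-k3sv1.26.1-k3s1" \
#   --cloud-config /config.yaml
```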
We call Kairos a meta-Linux Distribution. Why meta? Because it sits as a container layer, turning any Linux distro into an immutable system distributed via container registries. With Kairos, the OS is the container image, which is used for new installations and upgrades.
The Kairos 'factory' enables you to build custom bootable OS images for your edge devices, from your choice of OS (including openSUSE, Alpine, and Ubuntu) and your choice of edge Kubernetes distribution: Kairos is totally agnostic.
Each node boots from the same image, so no more snowflakes in your clusters, and each system is immutable—it boots in a restricted, permissionless mode, where certain paths are not writeable. For instance, after an installation it's not possible to install additional packages in the system, and any configuration change is discarded after a reboot. This dramatically reduces the attack surface and the impact of malicious actors gaining access to the device.
Keeping things simple while providing solutions to complex problems is a key goal of Kairos. Nodes can be onboarded via QR code, manually, remotely over SSH, interactively, or in a completely automated fashion with Kubernetes, enabling zero-touch provisioning.
Kairos optionally supports P2P full-mesh out of the box. New devices wake up with a shared secret and distributed ledger of other nodes and clusters to look for—they form a unified overlay network that’s E2E encrypted to discover other devices, even spanning multiple networks, to bootstrap the cluster.
Each Kairos OS is created as easily as writing a Dockerfile: no custom recipes or arcane languages here. You can run and customize the container images locally with Docker, Podman, or your container engine of choice, exactly as you already do for your applications.
Your built OS is a single container-based image distributed via container registries, so it plugs neatly into your existing CI/CD pipelines and makes scaling at the edge as repeatable and portable as working with containers. Customizing and mirroring images, scanning for vulnerabilities, gating upgrades, and patching CVEs are just some of the possibilities. Updating nodes is as easy as selecting a new version via Kubernetes: each node pulls the update from your repository and installs it on A/B partitions for zero-risk upgrades with failover.
Use Kubernetes management principles to manage and provision your clusters. Kairos supports automatic node provisioning via CRDs; upgrade management via Kubernetes; node repurposing and machine auto scaling capabilities; and complete configuration management via cloud-init.
Kairos draws on the strength of the cloud-native ecosystem, not just for its principles and approaches but also for its components. Cluster API is optionally supported as well and can be used to manage Kubernetes clusters through native Kubernetes APIs with zero-touch provisioning.
We move fast, but we try not to break stuff—particularly your nodes. Every change in the Kairos codebase runs through highly engineered automated testing before release to catch bugs earlier.
While Kairos has been engineered for large-scale use by DevOps and IT Engineering teams working in cloud, bare metal, edge and embedded systems environments, we welcome makers, hobbyists, and anyone in the community to participate in driving forward our vision of the immutable, decentralized, and containerized edge.
Kairos is a vibrant, active project with time and financial backing from Spectro Cloud, a Kubernetes management platform provider with a strong commitment to the open source community. It is a silver member of the CNCF and LF Edge, a Certified Kubernetes Service Provider, and a contributor to projects such as Cluster API. Find out more about Spectro Cloud here.