- Table of Contents
- Basic Usage
- Defining Your Cluster
- Examples
There are two methods for consuming this repository: you can either clone the repository and edit it as necessary, or import it as a collection. Both options are detailed below.
Note
If you are looking for airgap or tarball installation instructions, please go here
The simplest method for using this repository (as detailed in the main README.md) is to clone the repository and copy the sample inventory.
The second method for using this project is to import it as a collection in your own `requirements.yml`, which is possible because this repository contains a `galaxy.yml`. To import it, add the following to your `requirements.yml`:
```yaml
collections:
  - name: rancherfederal.rke2-ansible
    source: git@github.com:rancherfederal/rke2-ansible.git
    type: git
    version: main
```
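If you consume the project this way, the collection can then be installed with `ansible-galaxy collection install -r requirements.yml` before running your plays.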
Then you can call the RKE2 role in a play like so:
```yaml
---
- name: RKE2 play
  hosts: all
  any_errors_fatal: true
  roles:
    - role: rancherfederal.rke2_ansible.rke2
```
This repository is not intended to be opinionated, so it is important that you have read and understood the RKE2 docs before moving forward. This documentation is not an exhaustive explanation of all possible RKE2 configuration options; it is up to the end user to ensure their options are valid.
The most basic inventory file contains nothing more than your hosts, see below:
```yaml
---
rke2_cluster:
  children:
    rke2_servers:
      hosts:
        server0.example.com:
    rke2_agents:
      hosts:
        agent0.example.com:
```
This is the simplest possible inventory file and will deploy the latest available version of RKE2 with only default settings.
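With an inventory like this in place, the cluster can be deployed with something along the lines of `ansible-playbook -i inventory/hosts.yml site.yml`, where `site.yml` is a placeholder for whichever playbook calls the RKE2 role.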
Configurations and variables can become lengthy and unwieldy; as a general note of advice, it is best to move variables into a `group_vars` folder.
```text
./inventory
├── Cluster_A
│   ├── group_vars
│   │   ├── all.yml
│   │   ├── rke2_agents.yml
│   │   └── rke2_servers.yml
│   └── hosts.yml
└── Cluster_B
    ├── group_vars
    │   ├── all.yml
    │   ├── rke2_agents.yml
    │   └── rke2_servers.yml
    └── hosts.yml

5 directories, 8 files
```
Enabling SELinux in the playbook requires `selinux: true` to be set in either the cluster, group, or host level config profiles (please see Special Variables for more info). Generally this should be set at the cluster level, which can be done like so:
hosts.yml:
```yaml
---
all:
  vars:
    cluster_rke2_config:
      selinux: true
```
For more information please see the RKE2 documentation, here.
Enabling the CIS tasks in the playbook requires a CIS profile to be added to the Ansible variables file. This can be placed in either the cluster or group level config profiles (please see Special Variables for more info). In the example below the CIS profile is set at the group level, which ensures all server nodes run the CIS hardening profile tasks.
hosts.yml:
```yaml
rke2_cluster:
  children:
    rke2_servers:
      vars:
        group_rke2_config:
          profile: cis
```
For more information please see the RKE2 documentation, here.
In general this repository has attempted to move away from special or "magic" variables; however, some are unavoidable. A (non-exhaustive) list of these variables is below:
- `all.vars.rke2_install_version`: This defines what version of RKE2 to install
- `rke2_cluster.children.rke2_servers.vars.hosts.<host>.node_labels`: Defines a list of node labels for a specific server node
- `rke2_cluster.children.rke2_agents.vars.hosts.<host>.node_labels`: Defines a list of node labels for a specific agent node
There are three levels at which RKE2 config variables can be placed: `cluster_rke2_config`, `group_rke2_config`, and `host_rke2_config`.
- `all.vars.cluster_rke2_config`: Defines common RKE2 config options for the whole cluster
- `rke2_cluster.children.rke2_servers.vars.group_rke2_config`: Defines common RKE2 config options for the `rke2_servers` group
- `rke2_cluster.children.rke2_agents.vars.group_rke2_config`: Defines common RKE2 config options for the `rke2_agents` group
- `rke2_cluster.children.rke2_servers.vars.hosts.<host>.host_rke2_config`: Defines RKE2 config options for a specific server node
- `rke2_cluster.children.rke2_agents.vars.hosts.<host>.host_rke2_config`: Defines RKE2 config options for a specific agent node
Note
Throughout the rest of these docs you may see references to `rke2_servers.yml`; this is the group vars file for `rke2_servers` and is functionally equivalent to `rke2_cluster.children.rke2_servers.vars`. References to `rke2_agents.yml` are functionally equivalent to `rke2_cluster.children.rke2_agents.vars`.
It is important to understand that these variables are not special in the sense that they enable or disable certain functions in the RKE2 role, with one notable exception being the `profile` key. They are special in the sense that they will be condensed into a single config file on each node: each node ends up with a merged config file composed of `cluster_rke2_config`, `group_rke2_config`, and `host_rke2_config`.
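As an illustration of that merge, if `selinux: true` is set in `cluster_rke2_config` and `profile: cis` is set in the servers' `group_rke2_config` (as in the earlier examples), a server node could be expected to end up with a single merged RKE2 config file (by default `/etc/rancher/rke2/config.yaml`) roughly like this sketch:

```yaml
# Sketch of the merged per-node result of cluster_rke2_config,
# group_rke2_config and host_rke2_config (values taken from the examples above)
selinux: true
profile: cis
```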
A specific version of RKE2 can be selected for installation via the `all.vars.rke2_install_version` variable; please see the RKE2 repository for available releases.
group_vars/all.yml:
```yaml
---
all:
  vars:
    rke2_install_version: v1.29.12+rke2r1
```
In order to define a PSA (Pod Security Admission) config, server nodes will need to have the `rke2_pod_security_admission_config_file_path` variable defined; the `pod-security-admission-config-file` option will then need to be set in the RKE2 config variable at the relevant level (please see RKE2 Config Variables).
Below is an example of how this can be defined at the server group level (`rke2_cluster.children.rke2_servers.vars`):
group_vars/rke2_servers.yml:
```yaml
---
rke2_pod_security_admission_config_file_path: "{{ playbook_dir }}/docs/advanced_sample_inventory/files/pod-security-admission-config.yaml"
group_rke2_config:
  pod-security-admission-config-file: /etc/rancher/rke2/pod-security-admission-config.yaml
```
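The referenced `pod-security-admission-config.yaml` itself is a standard Kubernetes Pod Security Admission configuration. A minimal sketch is shown below; the `restricted` levels and the exemptions are assumptions to adapt to your cluster, not values mandated by this repository:

```yaml
# Hypothetical Pod Security Admission config; adjust levels and exemptions as needed
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
      defaults:
        enforce: "restricted"
        enforce-version: "latest"
        audit: "restricted"
        audit-version: "latest"
        warn: "restricted"
        warn-version: "latest"
      exemptions:
        usernames: []
        runtimeClasses: []
        namespaces: ["kube-system"]
```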
In order to define an audit policy config, server nodes will need to have the `rke2_audit_policy_config_file_path` variable defined; the `audit-policy-file` option will then need to be set in the RKE2 config variable at the relevant level (please see RKE2 Config Variables).
Below is an example of how this can be defined at the server group level (`rke2_cluster.children.rke2_servers.vars`):
group_vars/rke2_servers.yml:
```yaml
rke2_audit_policy_config_file_path: "{{ playbook_dir }}/docs/advanced_sample_inventory/files/audit-policy.yaml"
group_rke2_config:
  audit-policy-file: /etc/rancher/rke2/audit-policy.yaml
  kube-apiserver-arg:
    - audit-policy-file=/etc/rancher/rke2/audit-policy.yaml
    - audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log
```
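The referenced `audit-policy.yaml` is a standard Kubernetes audit policy. A minimal sketch (logging request metadata only, which is an assumption you should tailor to your auditing requirements) could look like:

```yaml
# Hypothetical audit policy; expand the rules to suit your requirements
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
```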
If you have a cluster that needs extra manifests to be deployed, or a critical component that must be configured, RKE2's `HelmChartConfig` is an available option (among others). This Ansible repository supports the use of these configuration files: place the Helm chart configs in a folder, give Ansible the path to that folder, and Ansible will enumerate the files and place them on the first server node.
There are two variables that control the deployment of manifests to the server nodes:
- `rke2_manifest_config_directory`
- `rke2_manifest_config_post_run_directory`
The first variable is used to deploy manifests to the server nodes before starting the RKE2 server process; this ensures critical components (like the CNI) can be configured when the RKE2 server process starts. The second ensures applications are deployed after the RKE2 server process starts. There are examples of both below.
The example used is configuring Cilium with the kube-proxy replacement enabled (a fairly common use case):
Warning
If this option is used you must provide a `become` password, and it must be the password for the local host running the Ansible playbook. The playbook looks for this directory on localhost and will run as root there. This imposes some limitations: if you are using an SSH password to log in to remote systems (typical for STIG'd clusters), the `become` password must be the same for the cluster nodes AND localhost.
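One common way to supply that password is to pass `--ask-become-pass` (or `-K`) to `ansible-playbook` when running the play.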
group_vars/rke2_servers.yml:
For this example to work, kube-proxy needs to be disabled and the Cilium CNI needs to be enabled.
```yaml
rke2_manifest_config_directory: "{{ playbook_dir }}/docs/advanced_sample_inventory/pre-deploy-manifests/"
group_rke2_config:
  # Use Cilium as the CNI
  cni:
    - cilium
  # Cilium will replace this
  disable-kube-proxy: true
```
cilium.yaml:
This file should be placed in the directory you intend to upload to the server node; in the example above that is `{{ playbook_dir }}/docs/advanced_sample_inventory/pre-deploy-manifests/`.
```yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    kubeProxyReplacement: true
    k8sServiceHost: 127.0.0.1
    k8sServicePort: 6443
    bpf:
      masquerade: true
      preallocateMaps: true
      tproxy: true
    bpfClockProbe: true
```
In the example below cert-manager is automatically deployed after the RKE2 server process is started.
group_vars/rke2_servers.yml:
```yaml
rke2_manifest_config_post_run_directory: "{{ playbook_dir }}/docs/advanced_sample_inventory/post-deploy-manifests/"
```
cert-manager.yaml:
This file should be placed in the directory you intend to upload to the server node; in the example above that is `{{ playbook_dir }}/docs/advanced_sample_inventory/post-deploy-manifests/`.
```yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: jetstack
  namespace: kube-system
spec:
  repo: https://charts.jetstack.io
  chart: cert-manager
  version: v1.16.2
  targetNamespace: cert-manager
  createNamespace: true
  valuesContent: |-
    crds:
      enabled: true
```
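Once the RKE2 server process is up, you can verify the manifest was picked up with something like `kubectl -n kube-system get helmchart` and then check for cert-manager pods in the `cert-manager` namespace.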
There are two examples provided in this folder: `basic_sample_inventory` and `advanced_sample_inventory`. The basic example is the simplest possible example; the advanced example combines all of the options explained above in one example.