## capi_byoh.yml
## This is the official Kit for the ClusterAPI BYOH provider with the ESP profile.
##
## Preconditions:
## - Users need to set up the ESP network topology (following the settings in config/extensions/esp_network.yml), and connect all the nodes to be installed to the ESP network.
## - Before running the "init" command, users need to:
##   - Update the ESP config file defined in the "OS - Config" section with the correct "git_username" and "git_token" to access the profile git repo.
##   - Input the MAC addresses and static IP addresses of the nodes in the "Parameters - nodes" config section.
##   - Input the default password of the nodes in the "Parameters - nodes" config section.
##   - Input the default SSH public key path; after ESP provisioning, this key allows you to connect to the target nodes without a password.
## - After OS provisioning with ESP finishes, and before running "cluster deploy", log in to the nodes and make sure users have permission to run "sudo" with no password.
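## For example, passwordless sudo can be enabled on each node with something
## like the following (illustrative commands, not part of this Kit; adjust the
## username to match the "user" field configured under "Parameters - nodes"):
##   echo 'sys-admin ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/sys-admin
##   sudo chmod 0440 /etc/sudoers.d/sys-admin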
##
## Features:
## - The ESP resource files will be installed from upstream with an external network connection.
## - The ClusterAPI BYOH provider will be used to do the cluster deployment.
## - The default container runtime used in the target cluster is containerd.
## - Offline deployment is not supported yet.
Use:
- kit/capi-platform.yml
- kit/common.yml
Parameters:
customconfig:
registry:
password: ""
## Input the SSH public key path below.
## Example: /home/path/.ssh/id_rsa.pub
## default_ssh_key_path: /home/path/.ssh/id_rsa.pub
default_ssh_key_path:
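## If no SSH key pair exists yet, one can be generated with, for example
## (illustrative command; set default_ssh_key_path to the resulting .pub file):
##   ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa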
## Input the HTTP proxy; ESP will use this parameter when provisioning the target node.
## Example: http_proxy: "http://www.example.com"
## After ESP provisioning, http_proxy is already set on the target node;
## the same applies to https_proxy and no_proxy.
global_settings:
provider_ip:
http_proxy: ""
https_proxy: ""
no_proxy: ""
dns_server: ""
ntp_server: ""
nodes:
- name: node-0
user: sys-admin
mac: "<mac_addr_0>"
ip:
ssh_passwd: ""
role:
- controlplane
- etcd
- name: node-1
user: sys-admin
mac: "<mac_addr_1>"
ip:
ssh_passwd: ""
role:
- worker
extensions:
- capi-byoh
- esp_network
- sriov
- service-tls
OS:
manifests:
- "config/manifests/os_provider_manifest.yml"
provider: esp
  # Before running "init" with this Kit config file, please update the ESP config
  # with the correct "git_username" and "git_token" to access the profile git repo.
config: "config/os-provider/esp_config_profile-ubuntu-20.04.yml"
  # EC supports multiple distros for ESP. Currently, the supported distros are:
  #   "ubuntu2004"
  #   "ubuntu2204"
  #   "debian11"
distro: "ubuntu2004"
Cluster:
manifests:
- "config/manifests/cluster_provider_manifest.yml"
provider: capi
config: "config/cluster-provider/capi_cluster.yml"
Components:
manifests:
- "config/manifests/component_manifest.yml"
selector:
- name: nfd
- name: nginx-ingress
override:
chartoverride: file://{{ .Workspace }}/config/service-overrides/ingress/capi-nginx-ingress.yml
- name: intel-sriov-network
- name: rook-ceph
- name: rook-ceph-cluster
- name: portainer-ce
- name: intel-gpu-plugin
- name: akri