Hybrid deployments #563

Draft · wants to merge 4 commits into main

1 change: 1 addition & 0 deletions README.md
@@ -136,6 +136,7 @@ Make sure to set/review the following vars:
| `lab_cloud` | the cloud within the lab environment for Red Hat Performance labs (Example: `cloud42`)
| `cluster_type` | either `mno`, or `sno` for the respective cluster layout
| `worker_node_count` | applies to mno cluster type for the desired worker count, ideal for leaving left over inventory hosts for other purposes
| `hybrid_worker_count` | applies to mno cluster type for the desired virtual (VM) worker count; the hypervisor (HV) nodes and VMs must already be set up
| `bastion_lab_interface` | set to the bastion machine's lab accessible interface
| `bastion_controlplane_interface` | set to the interface in which the bastion will be networked to the deployed ocp cluster
| `controlplane_lab_interface` | applies to mno cluster type and should map to the nodes interface in which the lab provides dhcp to and also required for public routable vlan based sno deployment(to disable this interface)
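
For orientation only, a minimal sketch of how these vars might look together for a hybrid MNO deployment. All values below are hypothetical placeholders (interface names, counts, and cloud name are not taken from this PR), and the vars file is whichever one the README already prescribes:

```yaml
# Hypothetical excerpt of the deployment vars file (placeholder values)
lab_cloud: cloud42                      # cloud within the lab environment
cluster_type: mno                       # hybrid workers only apply to mno
worker_node_count: 2                    # bare-metal workers to deploy
hybrid_worker_count: 10                 # additional Libvirt VM workers (HV nodes/VMs must already be set up)
bastion_lab_interface: eno1             # placeholder interface names
bastion_controlplane_interface: ens1f0
controlplane_lab_interface: eno1np0
```
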
10 changes: 5 additions & 5 deletions ansible/inventory/inventory-bm-byol.sample
@@ -11,9 +11,9 @@ bmc_user=redhat
bmc_password=password

[controlplane]
control-plane-0 bmc_address=172.29.170.70 network_mac=40:a6:b7:83:98:b0 lab_mac=ec:2a:72:33:15:f0 ip=198.18.10.5 vendor=Dell install_disk=/dev/disk/by-path/pci-0000:01:00.0-nvme-1
control-plane-1 bmc_address=172.29.170.71 network_mac=40:a6:b7:83:98:c0 lab_mac=ec:2a:72:33:16:c8 ip=198.18.10.6 vendor=Dell install_disk=/dev/disk/by-path/pci-0000:3f:00.0-scsi-0:0:1:0
control-plane-2 bmc_address=172.29.170.193 network_mac=b4:83:51:07:45:28 lab_mac=ec:2a:72:51:0a:f4 ip=198.18.10.7 vendor=Dell install_disk=/dev/disk/by-path/pci-0000:4a:00.0-scsi-0:3:111:0
control-plane-0 bmc_address=172.29.170.70 mac_address=40:a6:b7:83:98:b0 lab_mac=ec:2a:72:33:15:f0 ip=198.18.10.5 vendor=Dell install_disk=/dev/disk/by-path/pci-0000:01:00.0-nvme-1
control-plane-1 bmc_address=172.29.170.71 mac_address=40:a6:b7:83:98:c0 lab_mac=ec:2a:72:33:16:c8 ip=198.18.10.6 vendor=Dell install_disk=/dev/disk/by-path/pci-0000:3f:00.0-scsi-0:0:1:0
control-plane-2 bmc_address=172.29.170.193 mac_address=b4:83:51:07:45:28 lab_mac=ec:2a:72:51:0a:f4 ip=198.18.10.7 vendor=Dell install_disk=/dev/disk/by-path/pci-0000:4a:00.0-scsi-0:3:111:0

[controlplane:vars]
role=master
@@ -28,8 +28,8 @@ dns1=198.18.10.1
dns2=172.29.202.10

[worker]
worker-0 bmc_address=172.29.170.219 network_mac=b4:96:91:f2:d3:90 lab_mac=ec:2a:72:33:17:2a ip=198.18.10.8 vendor=Dell install_disk=/dev/disk/by-path/pci-0000:01:00.0-nvme-1
worker-1 bmc_address=172.29.170.73 network_mac=00:62:0b:2f:2b:50 lab_mac=c8:4b:d6:8d:78:2e ip=198.18.10.9 vendor=Dell install_disk=/dev/disk/by-path/pci-0000:3e:00.0-nvme-1
worker-0 bmc_address=172.29.170.219 mac_address=b4:96:91:f2:d3:90 lab_mac=ec:2a:72:33:17:2a ip=198.18.10.8 vendor=Dell install_disk=/dev/disk/by-path/pci-0000:01:00.0-nvme-1
worker-1 bmc_address=172.29.170.73 mac_address=00:62:0b:2f:2b:50 lab_mac=c8:4b:d6:8d:78:2e ip=198.18.10.9 vendor=Dell install_disk=/dev/disk/by-path/pci-0000:3e:00.0-nvme-1

[worker:vars]
role=worker
10 changes: 5 additions & 5 deletions ansible/inventory/inventory-bm.sample
@@ -10,9 +10,9 @@ bmc_user=quads
bmc_password=password

[controlplane]
control-plane-0 bmc_address=mgmt-control-plane-0.example.com network_mac=40:a6:b7:2a:74:40 lab_mac=bc:97:e1:7a:d4:40 ip=198.18.10.5 vendor=Dell install_disk=/dev/sda
control-plane-1 bmc_address=mgmt-control-plane-1.example.com network_mac=40:a6:b7:2a:98:90 lab_mac=bc:97:e1:78:c7:f0 ip=198.18.10.6 vendor=Dell install_disk=/dev/sda
control-plane-2 bmc_address=mgmt-control-plane-2.example.com network_mac=40:a6:b7:2a:6b:f0 lab_mac=bc:97:e1:7a:ce:70 ip=198.18.10.7 vendor=Dell install_disk=/dev/sda
control-plane-0 bmc_address=mgmt-control-plane-0.example.com mac_address=40:a6:b7:2a:74:40 lab_mac=bc:97:e1:7a:d4:40 ip=198.18.10.5 vendor=Dell install_disk=/dev/sda
control-plane-1 bmc_address=mgmt-control-plane-1.example.com mac_address=40:a6:b7:2a:98:90 lab_mac=bc:97:e1:78:c7:f0 ip=198.18.10.6 vendor=Dell install_disk=/dev/sda
control-plane-2 bmc_address=mgmt-control-plane-2.example.com mac_address=40:a6:b7:2a:6b:f0 lab_mac=bc:97:e1:7a:ce:70 ip=198.18.10.7 vendor=Dell install_disk=/dev/sda

[controlplane:vars]
role=master
@@ -27,8 +27,8 @@ dns1=198.18.10.1
dns2=10.1.36.1

[worker]
worker-0 bmc_address=mgmt-worker-0.example.com network_mac=40:a6:b7:2a:75:f1 lab_mac=bc:97:e1:7a:ce:71 ip=198.18.10.8 vendor=Dell install_disk=/dev/sda
worker-1 bmc_address=mgmt-worker-1.example.com network_mac=40:a6:b7:2b:bc:01 lab_mac=bc:97:e1:7a:ce:72 ip=198.18.10.9 vendor=Dell install_disk=/dev/sda
worker-0 bmc_address=mgmt-worker-0.example.com mac_address=40:a6:b7:2a:75:f1 lab_mac=bc:97:e1:7a:ce:71 ip=198.18.10.8 vendor=Dell install_disk=/dev/sda
worker-1 bmc_address=mgmt-worker-1.example.com mac_address=40:a6:b7:2b:bc:01 lab_mac=bc:97:e1:7a:ce:72 ip=198.18.10.9 vendor=Dell install_disk=/dev/sda

[worker:vars]
role=worker
4 changes: 2 additions & 2 deletions ansible/inventory/inventory-sno.sample
@@ -23,8 +23,8 @@ bmc_password=password

[sno]
# Single Node OpenShift Clusters
sno-0 bmc_address=mgmt-sno-0.example.com boot_iso=sno-0.iso ip=10.0.0.1 vendor=Dell lab_mac=00:4e:01:3d:e6:9e network_mac=40:a6:b7:00:63:61 install_disk=/dev/sda
sno-1 bmc_address=mgmt-sno-1.example.com boot_iso=sno-1.iso ip=10.0.0.2 vendor=Dell lab_mac=00:4e:01:3d:e6:ab network_mac=40:a6:b7:00:53:81 install_disk=/dev/sda
sno-0 bmc_address=mgmt-sno-0.example.com boot_iso=sno-0.iso ip=10.0.0.1 vendor=Dell lab_mac=00:4e:01:3d:e6:9e mac_address=40:a6:b7:00:63:61 install_disk=/dev/sda
sno-1 bmc_address=mgmt-sno-1.example.com boot_iso=sno-1.iso ip=10.0.0.2 vendor=Dell lab_mac=00:4e:01:3d:e6:ab mac_address=40:a6:b7:00:53:81 install_disk=/dev/sda

[sno:vars]
bmc_user=quads
4 changes: 4 additions & 0 deletions ansible/mno-deploy.yml
@@ -24,6 +24,10 @@
      vars:
        inventory_group: worker
        index: "{{ worker_node_count }}"
    - role: boot-iso
      vars:
        inventory_group: hv_vm
        index: "{{ hybrid_worker_count }}"
    - wait-hosts-discovered
    - configure-local-storage
    - install-cluster
90 changes: 90 additions & 0 deletions ansible/roles/boot-iso/tasks/libvirt.yml
@@ -0,0 +1,90 @@
---
# Libvirt tasks for booting an ISO
# Couldn't use the Ansible redfish_command module here because it requires a username and password.
# URLs modeled on http://docs.openstack.org/sushy-tools/latest/user/dynamic-emulator.html

- name: Libvirt - Power down machine prior to booting iso
  uri:
    url: "http://{{ hostvars[item]['ansible_host'] }}:9000/redfish/v1/Systems/{{ hostvars[item]['domain_uuid'] }}/Actions/ComputerSystem.Reset"
    method: POST
    headers:
      content-type: application/json
      Accept: application/json
    body: {"ResetType":"ForceOff"}
    body_format: json
    validate_certs: no
    status_code: 204
    return_content: yes
  register: redfish_forceoff

- name: Libvirt - Pause for power down
  pause:
    seconds: 1
  when: not redfish_forceoff.failed

Review comment (project member): Might be able to use a "check for powered down" type of task here instead of a sleep.
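
A possible shape for that suggestion: poll the emulator's Redfish `PowerState` until the system reports `Off` rather than sleeping. This is only a sketch, assuming the sushy-tools emulator exposes the standard `PowerState` property; the retry count and delay are arbitrary:

```yaml
# Sketch only: wait for the emulator to report the system as powered off
- name: Libvirt - Wait for machine to power down
  uri:
    url: "http://{{ hostvars[item]['ansible_host'] }}:9000/redfish/v1/Systems/{{ hostvars[item]['domain_uuid'] }}"
    method: GET
    headers:
      Accept: application/json
    validate_certs: no
    return_content: yes
  register: redfish_power_state
  until: redfish_power_state.json.PowerState == "Off"
  retries: 30
  delay: 2
  when: not redfish_forceoff.failed
```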


- name: Libvirt - Set OneTimeBoot VirtualCD
  uri:
    url: "http://{{ hostvars[item]['ansible_host'] }}:9000/redfish/v1/Systems/{{ hostvars[item]['domain_uuid'] }}"
    method: PATCH
    headers:
      content-type: application/json
      Accept: application/json
    body: { "Boot": { "BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Continuous" } }
    body_format: json
    validate_certs: no
    status_code: 204
    return_content: yes

- name: Libvirt - Check for Virtual Media
  uri:
    url: "http://{{ hostvars[item]['ansible_host'] }}:9000/redfish/v1/Managers/{{ hostvars[item]['domain_uuid'] }}/VirtualMedia/Cd"
    method: GET
    headers:
      content-type: application/json
      Accept: application/json
    body: {}
    body_format: json
    validate_certs: no
    status_code: 200
    return_content: yes
  register: check_virtual_media

- name: Libvirt - Eject any CD Virtual Media
  uri:
    url: "http://{{ hostvars[item]['ansible_host'] }}:9000/redfish/v1/Managers/{{ hostvars[item]['domain_uuid'] }}/VirtualMedia/Cd/Actions/VirtualMedia.EjectMedia"
    method: POST
    headers:
      content-type: application/json
      Accept: application/json
    body: {}
    body_format: json
    validate_certs: no
    status_code: 204
    return_content: yes
  when: check_virtual_media.json.Image

- name: Libvirt - Insert virtual media
  uri:
    url: "http://{{ hostvars[item]['ansible_host'] }}:9000/redfish/v1/Managers/{{ hostvars[item]['domain_uuid'] }}/VirtualMedia/Cd/Actions/VirtualMedia.InsertMedia"
    method: POST
    headers:
      content-type: application/json
      Accept: application/json
    body: {"Image":"http://{{ http_store_host }}:{{ http_store_port }}/{{ hostvars[item]['boot_iso'] }}", "Inserted": true}
    body_format: json
    validate_certs: no
    status_code: 204
    return_content: yes

- name: Libvirt - Power on
  uri:
    url: "http://{{ hostvars[item]['ansible_host'] }}:9000/redfish/v1/Systems/{{ hostvars[item]['domain_uuid'] }}/Actions/ComputerSystem.Reset"
    method: POST
    headers:
      content-type: application/json
      Accept: application/json
    body: {"ResetType":"On"}
    body_format: json
    validate_certs: no
    status_code: 204
    return_content: yes
6 changes: 6 additions & 0 deletions ansible/roles/boot-iso/tasks/main.yml
@@ -24,3 +24,9 @@
  with_items:
    - "{{ groups[inventory_group][:index|int] }}"
  when: hostvars[item]['vendor'] == 'Lenovo'

- name: Boot iso on libvirt vm
  include_tasks: libvirt.yml
  with_items:
- "{{ groups[inventory_group][hybrid_worker_offset:index|int] }}"
when: hostvars[item]['vendor'] == 'Libvirt'
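
To illustrate the offset/count slicing used here (and again in create-ai-cluster and mno-post-cluster-install below), a hypothetical debug task showing which `hv_vm` hosts get picked; the host names and numbers are made up and the task is not part of this PR:

```yaml
# Hypothetical sanity check, not part of this PR:
# with groups['hv_vm'] = [vm00001 .. vm00006], hybrid_worker_offset = 2 and
# hybrid_worker_count = 3, the slice selects [vm00003, vm00004, vm00005].
- name: Show which hv_vm hosts will become hybrid workers
  debug:
    msg: "{{ groups['hv_vm'][hybrid_worker_offset:hybrid_worker_offset+hybrid_worker_count] }}"
```
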
7 changes: 7 additions & 0 deletions ansible/roles/create-ai-cluster/tasks/main.yml
@@ -32,6 +32,13 @@
- cluster_type == "mno"
loop: "{{ groups['worker'] }}"

- name: MNO / Hybrid (VM Workers) - Populate static network configuration with VM worker nodes
include_tasks: static_network_config.yml
when:
- cluster_type == "mno"
- hybrid_worker_count > 0
loop: "{{ groups['hv_vm'][hybrid_worker_offset:hybrid_worker_offset+hybrid_worker_count] }}"

# - debug:
# msg: "{{ static_network_config }}"

@@ -1,10 +1,12 @@
[
{
"mac_address": "{{ hostvars[item]['network_mac'] }}",
"mac_address": "{{ hostvars[item]['mac_address'] }}",
"logical_nic_name": "{{ hostvars[item]['network_interface'] }}"
{% if 'lab_mac' in hostvars[item] %}
},
{
"mac_address": "{{ hostvars[item]['lab_mac'] }}",
"logical_nic_name": "{{ hostvars[item]['lab_interface'] }}"
{% endif %}
}
]
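
For reference, when a host defines both `mac_address` and `lab_mac` (as the bare-metal inventory entries above do), this template renders a two-entry list like the sketch below; hosts without `lab_mac`, such as the Libvirt VMs, get only the first entry. The MAC values are copied from the sample inventory and the interface names are placeholders:

```json
[
  {
    "mac_address": "40:a6:b7:83:98:b0",
    "logical_nic_name": "ens1f0"
  },
  {
    "mac_address": "ec:2a:72:33:15:f0",
    "logical_nic_name": "eno1"
  }
]
```
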
6 changes: 3 additions & 3 deletions ansible/roles/create-inventory/tasks/main.yml
@@ -87,9 +87,9 @@
controlplane0: "{{ ocpinventory.json.nodes[1].pm_addr }}"
controlplane1: "{{ ocpinventory.json.nodes[2].pm_addr }}"
controlplane2: "{{ ocpinventory.json.nodes[3].pm_addr }}"
controlplane0_network_mac: "{{ ocpinventory.json.nodes[1].mac[controlplane_network_interface_idx | int] }}"
controlplane1_network_mac: "{{ ocpinventory.json.nodes[2].mac[controlplane_network_interface_idx | int] }}"
controlplane2_network_mac: "{{ ocpinventory.json.nodes[3].mac[controlplane_network_interface_idx | int] }}"
controlplane0_mac_address: "{{ ocpinventory.json.nodes[1].mac[controlplane_network_interface_idx | int] }}"
controlplane1_mac_address: "{{ ocpinventory.json.nodes[2].mac[controlplane_network_interface_idx | int] }}"
controlplane2_mac_address: "{{ ocpinventory.json.nodes[3].mac[controlplane_network_interface_idx | int] }}"
controlplane0_vendor: "{{ hw_vendor[(ocpinventory.json.nodes[1].pm_addr.split('.')[0]).split('-')[-1]] }}"
controlplane1_vendor: "{{ hw_vendor[(ocpinventory.json.nodes[2].pm_addr.split('.')[0]).split('-')[-1]] }}"
controlplane2_vendor: "{{ hw_vendor[(ocpinventory.json.nodes[3].pm_addr.split('.')[0]).split('-')[-1]] }}"
22 changes: 16 additions & 6 deletions ansible/roles/create-inventory/templates/inventory-mno.j2
@@ -20,9 +20,9 @@ bmc_user={{ bmc_user }}
bmc_password={{ bmc_password }}

[controlplane]
{{ controlplane0.split('.')[0] | replace('mgmt-','') }} bmc_address={{ controlplane0 }} network_mac={{ controlplane0_network_mac }} lab_mac={{ controlplane0_lab_mac }} ip={{ controlplane_network | ansible.utils.nthhost(5) }} vendor={{ controlplane0_vendor }} install_disk={{ control_plane_install_disk }}
{{ controlplane1.split('.')[0] | replace('mgmt-','') }} bmc_address={{ controlplane1 }} network_mac={{ controlplane1_network_mac }} lab_mac={{ controlplane1_lab_mac }} ip={{ controlplane_network | ansible.utils.nthhost(6) }} vendor={{ controlplane1_vendor }} install_disk={{ control_plane_install_disk }}
{{ controlplane2.split('.')[0] | replace('mgmt-','') }} bmc_address={{ controlplane2 }} network_mac={{ controlplane2_network_mac }} lab_mac={{ controlplane2_lab_mac }} ip={{ controlplane_network | ansible.utils.nthhost(7) }} vendor={{ controlplane2_vendor }} install_disk={{ control_plane_install_disk }}
{{ controlplane0.split('.')[0] | replace('mgmt-','') }} bmc_address={{ controlplane0 }} mac_address={{ controlplane0_mac_address }} lab_mac={{ controlplane0_lab_mac }} ip={{ controlplane_network | ansible.utils.nthhost(5) }} vendor={{ controlplane0_vendor }} install_disk={{ control_plane_install_disk }}
{{ controlplane1.split('.')[0] | replace('mgmt-','') }} bmc_address={{ controlplane1 }} mac_address={{ controlplane1_mac_address }} lab_mac={{ controlplane1_lab_mac }} ip={{ controlplane_network | ansible.utils.nthhost(6) }} vendor={{ controlplane1_vendor }} install_disk={{ control_plane_install_disk }}
{{ controlplane2.split('.')[0] | replace('mgmt-','') }} bmc_address={{ controlplane2 }} mac_address={{ controlplane2_mac_address }} lab_mac={{ controlplane2_lab_mac }} ip={{ controlplane_network | ansible.utils.nthhost(7) }} vendor={{ controlplane2_vendor }} install_disk={{ control_plane_install_disk }}

[controlplane:vars]
role=master
@@ -42,7 +42,7 @@ dns2={{ labs[lab]['dns'][1] | default('') }}

[worker]
{% for worker in ocpinventory_worker_nodes %}
{{ worker.pm_addr.split('.')[0] | replace('mgmt-','') }} bmc_address={{ worker.pm_addr }} network_mac={{ worker.mac[controlplane_network_interface_idx|int] }} lab_mac={{ ( (mno_foreman_data.results| selectattr('json.name', 'eq', worker.pm_addr | replace('mgmt-',''))|first).json.interfaces | selectattr('primary', 'eq', True)|first).mac }} ip={{ controlplane_network | ansible.utils.nthhost(loop.index + mno_worker_node_offset) }} vendor={{ hw_vendor[(worker.pm_addr.split('.')[0]).split('-')[-1]] }} install_disk={{ worker_install_disk }}
{{ worker.pm_addr.split('.')[0] | replace('mgmt-','') }} bmc_address={{ worker.pm_addr }} mac_address={{ worker.mac[controlplane_network_interface_idx|int] }} lab_mac={{ ( (mno_foreman_data.results| selectattr('json.name', 'eq', worker.pm_addr | replace('mgmt-',''))|first).json.interfaces | selectattr('primary', 'eq', True)|first).mac }} ip={{ controlplane_network | ansible.utils.nthhost(loop.index + mno_worker_node_offset) }} vendor={{ hw_vendor[(worker.pm_addr.split('.')[0]).split('-')[-1]] }} install_disk={{ worker_install_disk }}
{% endfor %}

[worker:vars]
@@ -85,12 +85,12 @@ network_prefix={{ controlplane_network_prefix }}
{% for hv in ocpinventory_hv_nodes %}
{% set hv_loop = loop %}
{% for vm in range(hw_vm_counts[lab][(hv.pm_addr.split('.')[0]).split('-')[-1]]['default']) %}
{{ hv_vm_prefix }}{{ '%05d' % ctr.vm }} ansible_host={{ hv.pm_addr | replace('mgmt-','') }} hv_ip={{ controlplane_network | ansible.utils.nthhost(hv_loop.index + ocpinventory_worker_nodes|length + mno_worker_node_offset + hv_ip_offset) }} ip={{ controlplane_network | ansible.utils.nthhost(hv_vm_ip_offset + ctr.vm - 1) }} cpus={{ hv_vm_cpu_count }} memory={{ hv_vm_memory_size }} disk_size={{ hv_vm_disk_size }} vnc_port={{ 5900 + loop.index }} mac_address={{ (90520730730496 + ctr.vm) | ansible.utils.hwaddr('linux') }} domain_uuid={{ ctr.vm | to_uuid }} disk_location=/var/lib/libvirt/images bw_avg={{ hv_vm_bandwidth_average }} bw_peak={{ hv_vm_bandwidth_peak }} bw_burst={{ hv_vm_bandwidth_burst }}
{{ hv_vm_prefix }}{{ '%05d' % ctr.vm }} ansible_host={{ hv.pm_addr | replace('mgmt-','') }} hv_ip={{ controlplane_network | ansible.utils.nthhost(hv_loop.index + ocpinventory_worker_nodes|length + mno_worker_node_offset + hv_ip_offset) }} ip={{ controlplane_network | ansible.utils.nthhost(hv_vm_ip_offset + ctr.vm - 1) }} cpus={{ hv_vm_cpu_count }} memory={{ hv_vm_memory_size }} disk_size={{ hv_vm_disk_size }} vnc_port={{ 5900 + loop.index }} mac_address={{ (90520730730496 + ctr.vm) | ansible.utils.hwaddr('linux') }} domain_uuid={{ ctr.vm | to_uuid }} disk_location=/var/lib/libvirt/images bw_avg={{ hv_vm_bandwidth_average }} bw_peak={{ hv_vm_bandwidth_peak }} bw_burst={{ hv_vm_bandwidth_burst }} vendor=Libvirt install_disk=/dev/sda
{% set ctr.vm = ctr.vm + 1 %}
{% endfor %}
{% if hv.disk2_enable %}
{% for vm in range(hw_vm_counts[lab][(hv.pm_addr.split('.')[0]).split('-')[-1]][hv.disk2_device]) %}
{{ hv_vm_prefix }}{{ '%05d' % ctr.vm }} ansible_host={{ hv.pm_addr | replace('mgmt-','') }} hv_ip={{ controlplane_network | ansible.utils.nthhost(hv_loop.index + ocpinventory_worker_nodes|length + mno_worker_node_offset + hv_ip_offset) }} ip={{ controlplane_network | ansible.utils.nthhost(hv_vm_ip_offset + ctr.vm - 1) }} cpus={{ hv_vm_cpu_count }} memory={{ hv_vm_memory_size }} disk_size={{ hv_vm_disk_size }} vnc_port={{ 5900 + loop.index + hw_vm_counts[lab][(hv.pm_addr.split('.')[0]).split('-')[-1]]['default'] }} mac_address={{ (90520730730496 + ctr.vm) | ansible.utils.hwaddr('linux') }} domain_uuid={{ ctr.vm | to_uuid }} disk_location={{ disk2_mount_path }}/libvirt/images bw_avg={{ hv_vm_bandwidth_average }} bw_peak={{ hv_vm_bandwidth_peak }} bw_burst={{ hv_vm_bandwidth_burst }}
{{ hv_vm_prefix }}{{ '%05d' % ctr.vm }} ansible_host={{ hv.pm_addr | replace('mgmt-','') }} hv_ip={{ controlplane_network | ansible.utils.nthhost(hv_loop.index + ocpinventory_worker_nodes|length + mno_worker_node_offset + hv_ip_offset) }} ip={{ controlplane_network | ansible.utils.nthhost(hv_vm_ip_offset + ctr.vm - 1) }} cpus={{ hv_vm_cpu_count }} memory={{ hv_vm_memory_size }} disk_size={{ hv_vm_disk_size }} vnc_port={{ 5900 + loop.index + hw_vm_counts[lab][(hv.pm_addr.split('.')[0]).split('-')[-1]]['default'] }} mac_address={{ (90520730730496 + ctr.vm) | ansible.utils.hwaddr('linux') }} domain_uuid={{ ctr.vm | to_uuid }} disk_location={{ disk2_mount_path }}/libvirt/images bw_avg={{ hv_vm_bandwidth_average }} bw_peak={{ hv_vm_bandwidth_peak }} bw_burst={{ hv_vm_bandwidth_burst }} vendor=Libvirt install_disk=/dev/sda
{% set ctr.vm = ctr.vm + 1 %}
{% endfor %}
{% endif %}
@@ -105,6 +105,16 @@ machine_network={{ controlplane_network }}
network_prefix={{ controlplane_network_prefix }}
gateway={{ controlplane_network_gateway }}
bw_limit={{ hv_vm_bandwidth_limit }}

boot_iso=discovery.iso
lab_interface={{ controlplane_lab_interface }}
network_interface={{ controlplane_network_interface }}
{% if controlplane_bastion_as_dns %}
dns1={{ bastion_controlplane_ip }}
{% else %}
dns1={{ labs[lab]['dns'][0] }}
dns2={{ labs[lab]['dns'][1] | default('') }}
{% endif %}
{% else %}
[hv]
# Set `hv_inventory: true` to populate
2 changes: 1 addition & 1 deletion ansible/roles/create-inventory/templates/inventory-sno.j2
@@ -35,7 +35,7 @@ bmc_password={{ bmc_password }}
{%- else -%}
{%- set ip=(sno_foreman_data.results|selectattr('json.name', 'eq', sno.pm_addr | replace('mgmt-',''))|first).json.ip -%}
{%- endif -%}
{% if not loop.first %}# {% endif %}{{ sno.pm_addr.split('.')[0] | replace('mgmt-','') }} bmc_address={{ sno.pm_addr }} boot_iso={{ sno.pm_addr.split('.')[0] | replace('mgmt-','') }}.iso ip={{ ip }} vendor={{ hw_vendor[(sno.pm_addr.split('.')[0]).split('-')[-1]] }} lab_mac={{ (sno_foreman_data.results|selectattr('json.name', 'eq', sno.pm_addr | replace('mgmt-',''))|first) | json_query(mac_query) | join(', ') }} network_mac={{ sno.mac[controlplane_network_interface_idx] }} install_disk={{ sno_install_disk }}
{% if not loop.first %}# {% endif %}{{ sno.pm_addr.split('.')[0] | replace('mgmt-','') }} bmc_address={{ sno.pm_addr }} boot_iso={{ sno.pm_addr.split('.')[0] | replace('mgmt-','') }}.iso ip={{ ip }} vendor={{ hw_vendor[(sno.pm_addr.split('.')[0]).split('-')[-1]] }} lab_mac={{ (sno_foreman_data.results|selectattr('json.name', 'eq', sno.pm_addr | replace('mgmt-',''))|first) | json_query(mac_query) | join(', ') }} mac_address={{ sno.mac[controlplane_network_interface_idx] }} install_disk={{ sno_install_disk }}
{% endfor %}

[sno:vars]
2 changes: 1 addition & 1 deletion ansible/roles/mno-post-cluster-install/tasks/main.yml
@@ -154,7 +154,7 @@
- name: Label the worker nodes
shell: |
KUBECONFIG={{ bastion_cluster_config_dir }}/kubeconfig oc label no --overwrite {{ item }} localstorage=true prometheus=true
with_items: "{{ groups['worker'] }}"
with_items: "{{ groups['worker'] + groups['hv_vm'][hybrid_worker_offset:hybrid_worker_offset+hybrid_worker_count] }}"

- name: Install local-storage operator
shell: