
Network configuration


These playbooks install OpenShift using the Assisted Service tool. The installation requires some specific network settings, so you may need to add the proper configuration to your gateway.

Control plane

If you are deploying the control plane with the ztp-cluster-deploy tool, you need a running DHCP server on your network. The control plane is provisioned as virtual machines, and their network is configured using bridged networking. The first step is therefore to create a bridge on your provisioner host, on the interface where you want the virtual machines to be plugged in. A sample interface/bridge configuration can look like:

```
[root@ztphost ztp]# cat /etc/sysconfig/network-scripts/ifcfg-<name_of_interface>
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
NAME=<name_of_interface>
UUID=<sample_uuid>
DEVICE=<name_of_interface>
ONBOOT=yes
BRIDGE=ztp-br
```

```
[root@ztphost ztp]# cat /etc/sysconfig/network-scripts/ifcfg-ztp-br
TYPE=Bridge
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ztp-br
DEVICE=ztp-br
ONBOOT=yes
IPADDR=<bridge_ip>
NETMASK=255.255.255.0
GATEWAY=<interface_gateway>
```
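If you prefer driving NetworkManager directly instead of editing ifcfg files by hand, a roughly equivalent setup can be sketched with nmcli. This is only a sketch: the interface name, bridge IP and gateway are placeholders you need to replace with your own values.

```
# Create the bridge with a static address (placeholders: <bridge_ip>, <interface_gateway>)
nmcli connection add type bridge ifname ztp-br con-name ztp-br \
    ipv4.method manual ipv4.addresses <bridge_ip>/24 ipv4.gateway <interface_gateway>

# Enslave the physical interface to the bridge (placeholder: <name_of_interface>)
nmcli connection add type bridge-slave ifname <name_of_interface> master ztp-br

# Bring the bridge up
nmcli connection up ztp-br
```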

This interface should have DHCP and DNS capabilities and external network access. Several DNS entries need to be added to the gateway configuration so that the cluster names can be resolved.
There also needs to be a static mapping on the DHCP server that matches the MAC addresses of the virtual machines to the master-*.<cluster_name>.<cluster_domain> names (see the sketch below).
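As an illustration only, if your gateway uses dnsmasq for DHCP, the static mapping could look like the following. The MAC addresses, IPs, file path and cluster domain are assumptions for the example, not values required by the playbooks.

```
# /etc/dnsmasq.d/ztp.conf (hypothetical) -- static leases for the control-plane VMs
domain=mycluster.example.com
dhcp-range=192.168.100.20,192.168.100.100,12h

# Map each VM's MAC address to a fixed IP and the master-* hostname
dhcp-host=52:54:00:aa:bb:01,master-0,192.168.100.11
dhcp-host=52:54:00:aa:bb:02,master-1,192.168.100.12
dhcp-host=52:54:00:aa:bb:03,master-2,192.168.100.13
```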

DNS entries

The following entries need to be added to the DNS server serving that interface:
api.<cluster_name>.<cluster_domain> -> needs to point to the API VIP defined in the inventory
*.apps.<cluster_name>.<cluster_domain> -> needs to point to the INGRESS VIP defined in the inventory

The system should also be able to resolve all hostname entries of the form <node_name>.<cluster_name>.<cluster_domain>.
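For reference, with a dnsmasq-based DNS server the two entries above could be sketched as follows. The cluster domain and VIP addresses are placeholders taken from this example, not from the inventory.

```
# API VIP (example address)
host-record=api.mycluster.example.com,192.168.100.5

# Ingress VIP: address=/.../ matches apps.<cluster_name>.<cluster_domain>
# and every name under it, which covers the *.apps wildcard
address=/apps.mycluster.example.com/192.168.100.6
```

With a BIND-style zone file the same effect is achieved with an A record for api and a wildcard *.apps A record pointing at the respective VIPs.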
