Commit 0c79371

Apply required ospdo changes to adoption docs
Jira: https://issues.redhat.com/browse/OSPRH-14644
1 parent: 588171f

17 files changed: +228 -34

docs_user/adoption-attributes.adoc

+1 -1

@@ -149,7 +149,7 @@ ifeval::["{build}" == "downstream"]
 :telemetry: Telemetry service
 endif::[]

-ifeval::["{build}-{build_variant}" == "downstream-ospdo"]
+ifeval::["{build_variant}" == "ospdo"]
 :OpenStackPreviousInstaller: director Operator
 endif::[]
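(For context: `build` and `build_variant` are AsciiDoc attributes evaluated by these `ifeval` directives, typically supplied by the docs build. A hedged example with the asciidoctor CLI; the entry-point file name is illustrative, not from this repo:)

----
$ asciidoctor -a build=downstream -a build_variant=ospdo docs_user/main.adoc
----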

docs_user/assemblies/assembly_adopting-the-data-plane.adoc

+2

@@ -8,7 +8,9 @@ ifdef::context[:parent-context: {context}]

 Adopting the {rhos_long} data plane involves the following steps:

+ifeval::["{build_variant}" != "ospdo"]
 . Stop any remaining services on the {rhos_prev_long} ({OpenStackShort}) {rhos_prev_ver} control plane.
+endif::[]
 . Deploy the required custom resources.
 . Perform a fast-forward upgrade on Compute services from {OpenStackShort} {rhos_prev_ver} to {rhos_acro} {rhos_curr_ver}.
 . Adopt Networker services to the {rhos_acro} data plane.

docs_user/modules/con_adoption-limitations.adoc

-1

@@ -29,7 +29,6 @@ The following {compute_service_first_ref} features are Technology Previews:
 * AMD SEV
 * Direct download from Rados Block Device (RBD)
 * File-backed memory
-* `Provider.yaml`

 .Unsupported features

docs_user/modules/con_adoption-prerequisites.adoc

+3

@@ -11,6 +11,9 @@ Planning information::
 * Review the adoption-specific networking requirements. For more information, see xref:configuring-network-for-RHOSO-deployment_planning[Configuring the network for the RHOSO deployment].
 * Review the adoption-specific storage requirements. For more information, see xref:storage-requirements_configuring-network[Storage requirements].
 * Review how to customize your deployed control plane with the services that are required for your environment. For more information, see link:{customizing-rhoso}/index[{customizing-rhoso-t}].
+ifeval::["{build_variant}" == "ospdo"]
+* Familiarize yourself with a disconnected environment deployment. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_configuring-an-airgapped-environment_air-gapped-environment[Configuring an airgapped environment] in _Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator_.
+endif::[]
 * Familiarize yourself with the following {OpenShiftShort} concepts that are used during adoption:
 ** link:{defaultOCPURL}/nodes/overview-of-nodes[Overview of nodes]
 ** link:{defaultOCPURL}/nodes/index#nodes-scheduler-node-selectors-about_nodes-scheduler-node-selectors[About node selectors]

docs_user/modules/con_adoption-process-overview.adoc

+3

@@ -9,6 +9,9 @@ Familiarize yourself with the steps of the adoption process and the optional pos
 . xref:migrating-tls-everywhere_configuring-network[Migrate TLS everywhere (TLS-e) to the Red Hat OpenStack Services on OpenShift (RHOSO) deployment].
 . xref:migrating-databases-to-the-control-plane_configuring-network[Migrate your existing databases to the new control plane].
 . xref:adopting-openstack-control-plane-services_configuring-network[Adopt your Red Hat OpenStack Platform 17.1 control plane services to the new RHOSO 18.0 deployment].
+ifeval::["{build_variant}" == "ospdo"]
+. xref:ospdo-scale-down-pre-database-adoption_configuring-network[Scale down the director Operator resources].
+endif::[]
 . xref:adopting-data-plane_adopt-control-plane[Adopt the RHOSO 18.0 data plane].
 . xref:migrating-the-object-storage-service_adopt-control-plane[Migrate the Object Storage service (swift) to the RHOSO nodes].
 . xref:ceph-migration_adopt-control-plane[Migrate the {Ceph} cluster].

docs_user/modules/con_comparing-configuration-files-between-deployments.adoc

+4

@@ -3,6 +3,10 @@
 = Comparing configuration files between deployments

 To help you manage the configuration for your {OpenStackPreviousInstaller} and {rhos_prev_long} ({OpenStackShort}) services, you can compare the configuration files between your {OpenStackPreviousInstaller} deployment and the {rhos_long} cloud by using the os-diff tool.
+ifeval::["{build_variant}" == "ospdo"]
+[NOTE]
+The os-diff tool does not currently support director Operator.
+endif::[]

 .Prerequisites

docs_user/modules/proc_adopting-compute-services-to-the-data-plane.adoc

-3

@@ -125,9 +125,6 @@ endif::[]
 ifeval::["{build}" == "downstream"]
 $(cat <path_to_SSH_key> | base64 | sed 's/^/ /')
 endif::[]
-ifeval::["{build_variant}" == "ospdo"]
-$(oc exec -n $<ospdo_namespace> -t openstackclient openstackclient -- cat /home/cloud-admin/.ssh/id_rsa | base64 | sed 's/^/ /')
-endif::[]
 EOF
 ----
 +
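(For context: the `base64 | sed 's/^/ /'` fragments above run inside a heredoc that builds the SSH-key secret; the indent that sed inserts keeps the base64 payload aligned under its YAML key. A minimal sketch of that surrounding pattern; the secret name, namespace, and key path are illustrative:)

----
$ oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: dataplane-adoption-secret   # illustrative name
  namespace: openstack
data:
  ssh-privatekey: |
$(cat ~/.ssh/id_rsa | base64 | sed 's/^/    /')
EOF
----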

docs_user/modules/proc_adopting-image-service-with-ceph-backend.adoc

+5

@@ -55,6 +55,11 @@ EOF
 If you backed up your {rhos_prev_long} ({OpenStackShort}) services configuration file from the original environment, you can compare it with the configuration file that you adopted and ensure that the configuration is correct.
 For more information, see xref:pulling-configuration-from-tripleo-deployment_adopt-control-plane[Pulling the configuration from a {OpenStackPreviousInstaller} deployment].

+ifeval::["{build_variant}" == "ospdo"]
+[NOTE]
+The os-diff tool does not currently support director Operator.
+endif::[]
+
 ----
 os-diff diff /tmp/collect_tripleo_configs/glance/etc/glance/glance-api.conf glance_patch.yaml --crd
 ----

docs_user/modules/proc_deploying-backend-services.adoc

-7

@@ -92,13 +92,6 @@ make input
 ----
 endif::[]

-ifeval::["{build_variant}" == "ospdo"]
-+
-----
-$ oc get secret tripleo-passwords -o jsonpath='{.data.*}' | base64 -d>~/tripleo-standalone-passwords.yaml
-----
-endif::[]
-
 ifeval::["{build}" == "downstream"]
 . Create the {OpenStackShort} secret. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/{rhos_curr_ver}/html-single/deploying_red_hat_openstack_services_on_openshift/index#proc_providing-secure-access-to-the-RHOSO-services_preparing[Providing secure access to the Red Hat OpenStack Services on OpenShift services] in _Deploying Red Hat OpenStack Services on OpenShift_.
 endif::[]
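(Context for the removed ospdo command: `jsonpath='{.data.*}'` selects every value under `.data`, and because the secret holds a single key the output is one base64 blob. An equivalent jq form, shown only as a sketch under the assumption of a single-key secret:)

----
$ oc get secret tripleo-passwords -o json \
    | jq -r '.data | to_entries[0].value' | base64 -d > ~/tripleo-standalone-passwords.yaml
----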

docs_user/modules/proc_migrating-databases-to-mariadb-instances.adoc

+13

@@ -26,6 +26,11 @@ $ STORAGE_CLASS=local-storage
 $ MARIADB_IMAGE=registry.redhat.io/rhoso/openstack-mariadb-rhel9:18.0
 endif::[]

+ifeval::["{build_variant}" == "ospdo"]
+$ MARIADB_CLIENT_ANNOTATIONS='[{"name": "internalapi-static", "ips": ["10.2.120.9/24"]}]'
+$ MARIADB_RUN_OVERRIDES="$MARIADB_CLIENT_ANNOTATIONS"
+endif::[]
+
 $ CELLS="default" <1>
 $ DEFAULT_CELL_NAME="cell1"
 $ RENAMED_CELLS="$DEFAULT_CELL_NAME"

@@ -139,10 +144,18 @@ metadata:
   name: mariadb-copy-data
   annotations:
     openshift.io/scc: anyuid
+ifeval::["{build_variant}" != "ospdo"]
     k8s.v1.cni.cncf.io/networks: internalapi
+endif::[]
+ifeval::["{build_variant}" == "ospdo"]
+    k8s.v1.cni.cncf.io/networks: ${MARIADB_RUN_OVERRIDES}
+endif::[]
   labels:
     app: adoption
 spec:
+ifeval::["{build_variant}" == "ospdo"]
+  nodeName: <$CONTROLLER_NODE>
+endif::[]
   containers:
   - image: $MARIADB_IMAGE
     command: [ "sh", "-c", "sleep infinity"]
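(For reference: after shell expansion, the ospdo-variant annotation resolves to a static-IP request against an `internalapi-static` network attachment definition. The NAD name and IP come from the variables set earlier in this hunk; both are environment-specific examples:)

----
$ echo "${MARIADB_RUN_OVERRIDES}"
[{"name": "internalapi-static", "ips": ["10.2.120.9/24"]}]
----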

docs_user/modules/proc_migrating-ovn-data.adoc

+28

@@ -26,11 +26,25 @@ SOURCE_OVSDB_IP=172.17.0.100 # For IPv4
 SOURCE_OVSDB_IP=[fd00:bbbb::100] # For IPv6
 ----
 +
+ifeval::["{build_variant}" == "ospdo"]
+Update the `SOURCE_OVSDB_IP` value to the internalApi IP address that is associated with the remaining OSP 17 Controller VM. You can retrieve the IP address by running the following command:
+----
+$ $CONTROLLER_SSH ip a s enp4s0
+# Select the IP address that does not have a /32 prefix
+----
++
+[NOTE]
+If you use a disconnected director Operator deployment, use
+`OVSDB_IMAGE=registry.redhat.io/rhoso/openstack-ovn-base-rhel9@sha256:967046c6bdb8f55c236085b5c5f9667f0dbb9f3ac52a6560dd36a6bfac051e1f`.
+For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_configuring-an-airgapped-environment_air-gapped-environment[Configuring an airgapped environment] in _Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator_.
+endif::[]
+ifeval::["{build_variant}" != "ospdo"]
 To get the value to set `SOURCE_OVSDB_IP`, query the puppet-generated configurations in a Controller node:
 +
 ----
 $ grep -rI 'ovn_[ns]b_conn' /var/lib/config-data/puppet-generated/
 ----
+endif::[]

 .Procedure
 ifeval::["{build_variant}" == "ospdo"]

@@ -299,28 +313,42 @@ ServicesToStop=("tripleo_ovn_cluster_north_db_server.service"

 echo "Stopping systemd OpenStack services"
 for service in ${ServicesToStop[*]}; do
+ifeval::["{build_variant}" != "ospdo"]
     for i in {1..3}; do
         SSH_CMD=CONTROLLER${i}_SSH
+endif::[]
+ifeval::["{build_variant}" == "ospdo"]
+        SSH_CMD=CONTROLLER_SSH
+endif::[]
         if [ ! -z "${!SSH_CMD}" ]; then
             echo "Stopping the $service in controller $i"
             if ${!SSH_CMD} sudo systemctl is-active $service; then
                 ${!SSH_CMD} sudo systemctl stop $service
             fi
         fi
+ifeval::["{build_variant}" != "ospdo"]
     done
+endif::[]
 done

 echo "Checking systemd OpenStack services"
 for service in ${ServicesToStop[*]}; do
+ifeval::["{build_variant}" != "ospdo"]
     for i in {1..3}; do
         SSH_CMD=CONTROLLER${i}_SSH
+endif::[]
+ifeval::["{build_variant}" == "ospdo"]
+        SSH_CMD=CONTROLLER_SSH
+endif::[]
         if [ ! -z "${!SSH_CMD}" ]; then
             if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=inactive >/dev/null; then
                 echo "ERROR: Service $service still running on controller $i"
             else
                 echo "OK: Service $service is not running on controller $i"
             fi
         fi
+ifeval::["{build_variant}" != "ospdo"]
     done
+endif::[]
 done
 ----
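(The `${!SSH_CMD}` pattern above is Bash indirect expansion. A minimal, self-contained illustration with hypothetical values:)

----
# SSH_CMD holds the *name* of another variable; ${!SSH_CMD} expands to that
# variable's value (Bash indirect expansion).
CONTROLLER1_SSH="ssh cloud-admin@192.168.24.10"   # hypothetical
SSH_CMD=CONTROLLER1_SSH
${!SSH_CMD} hostname   # runs: ssh cloud-admin@192.168.24.10 hostname
----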

docs_user/modules/proc_ospdo-scale-down-pre-database-adoption.adoc

+95 -4

@@ -1,18 +1,98 @@
-[id="ospdo_scale_down_pre_database_adoption_{context}"]
+[id="ospdo-scale-down-pre-database-adoption_{context}"]

 = Scaling down director Operator resources

 Before you migrate your databases to the control plane, you must scale down and remove OpenStack director Operator (OSPdO) resources in order to use the {rhos_long} resources.

+You must perform the following actions:
+
+* Dump selected data from the existing {OpenStackShort} {rhos_prev_ver} cluster. You use this data to build the custom resources for the data plane adoption.
+* After you extract and save the data, remove the OSPdO control plane and operator.
+
 .Procedure
+. Download the NIC templates:
++
+----
+# Make a temp directory if it doesn't exist
+mkdir -p temp
+cd temp
+echo "Extract nic templates"
+oc get -n "${OSPDO_NAMESPACE}" cm tripleo-tarball -ojson | jq -r '.binaryData."tripleo-tarball.tar.gz"' | base64 -d | tar xzvf -
+# Revert back to the original directory
+cd -
+----
+. Get the SSH key for accessing the data plane nodes:
++
+----
+# Make a temp directory if it doesn't exist
+mkdir -p temp
+# Get the SSH key from the openstackclient (osp 17)
+# to be used later to create the SSH secret for dataplane adoption
+$OS_CLIENT cat /home/cloud-admin/.ssh/id_rsa > temp/ssh.private
+$OS_CLIENT cat /home/cloud-admin/.ssh/id_rsa.pub > temp/ssh.pub
+echo "SSH private and public keys saved in temp/ssh.private and temp/ssh.pub"
+----
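(Illustrative follow-up, not part of the commit: the saved key pair typically backs the data plane SSH secret later in the adoption. A sketch; the secret name below is the openstack-operator default and may differ in your procedure:)

----
$ oc create secret generic dataplane-ansible-ssh-private-key-secret \
    -n openstack \
    --from-file=ssh-privatekey=temp/ssh.private \
    --from-file=ssh-publickey=temp/ssh.pub
----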
+. Get the OVN configuration from each Compute node role, `OpenStackBaremetalSet`. This configuration is used later to build the `OpenStackDataPlaneNodeSet` resources. Repeat the following step for each `OpenStackBaremetalSet`:
++
+----
+# Make a temp directory if it doesn't exist
+mkdir -p temp
+#
+# Query the first node in the OSBMS
+#
+IP=$(oc -n "${OSPDO_NAMESPACE}" get openstackbaremetalsets.osp-director.openstack.org <<OSBMS-NAME>> -ojson | jq -r '.status.baremetalHosts| keys[] as $k | .[$k].ipaddresses["ctlplane"]'| awk -F'/' '{print $1}')
+# Get the OVN parameters
+oc -n "${OSPDO_NAMESPACE}" exec -c openstackclient openstackclient -- \
+  ssh cloud-admin@${IP} sudo ovs-vsctl -f json --columns=external_ids list Open |
+  jq -r '.data[0][0][1][]|join("=")' | sed -n -E 's/^(ovn.*)+=(.*)+/edpm_\1: \2/p' |
+  grep -v -e ovn-remote -e encap-tos -e openflow -e ofctrl > temp/<<OSBMS-NAME>>.txt
+----
++
+----
+# Create a temp directory if it does not exist
+mkdir -p temp
+for name in $(oc -n "${OSPDO_NAMESPACE}" get openstackbaremetalsets.osp-director.openstack.org | awk 'NR > 1 {print $1}'); do
+  oc -n "${OSPDO_NAMESPACE}" get openstackbaremetalsets.osp-director.openstack.org $name -ojson |
+  jq -r '.status.baremetalHosts| "nodes:", keys[] as $k | .[$k].ipaddresses as $a |
+    "  \($k):",
+    "    hostName: \($k)",
+    "    ansible:",
+    "      ansibleHost: \($a["ctlplane"] | sub("/\\d+"; ""))",
+    "    networks:", ($a | to_entries[] | "    - name: \(.key) \n      fixedIP: \(.value | sub("/\\d+"; ""))\n      subnetName: subnet1")' > temp/${name}-nodes.txt
+done
+----
+. Remove the conflicting repositories and packages from all Compute hosts. Define the OSPdO and {OpenStackShort} {rhos_prev_ver} Pacemaker services that must be stopped:
++
+----
+PacemakerResourcesToStop_dataplane=(
+  "galera-bundle"
+  "haproxy-bundle"
+  "rabbitmq-bundle")

+# Stop these PCM services after adopting the control
+# plane, but before starting deletion of the OSPdO (osp17) env
+echo "Stopping pacemaker OpenStack services"
+SSH_CMD=CONTROLLER_SSH
+if [ -n "${!SSH_CMD}" ]; then
+  echo "Using controller 0 to run pacemaker commands"
+  for resource in "${PacemakerResourcesToStop_dataplane[@]}"; do
+    if ${!SSH_CMD} sudo pcs resource config "$resource" &>/dev/null; then
+      echo "Stopping $resource"
+      ${!SSH_CMD} sudo pcs resource disable "$resource"
+    else
+      echo "Service $resource not present"
+    fi
+  done
+fi
+----
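(As a worked example of the node-export jq program above, with a hypothetical host name and addresses: a baremetal set containing one host `compute-0` with `ctlplane 192.168.25.100/24` and `internalapi 172.17.0.100/24` would yield a `temp/<name>-nodes.txt` similar to the following:)

----
nodes:
  compute-0:
    hostName: compute-0
    ansible:
      ansibleHost: 192.168.25.100
    networks:
    - name: ctlplane
      fixedIP: 192.168.25.100
      subnetName: subnet1
    - name: internalapi
      fixedIP: 172.17.0.100
      subnetName: subnet1
----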
 . Scale down the {rhos_acro} OpenStack Operator `controller-manager` to 0 replicas and temporarily delete the `OpenStackControlPlane` `OpenStackClient` pod, so that you can use the OSPdO `controller-manager` to clean up some of its resources. The cleanup is needed to avoid a pod name collision between the OSPdO OpenStackClient and the {rhos_acro} OpenStackClient.
 +
 ----
-$ oc patch csv -n openstack-operators openstack-operator.v0.0.1 --type json -p="[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "0"}]"
+$ oc patch csv -n openstack-operators openstack-operator.v1.0.5 --type json -p="[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "0"}]"
 $ oc delete openstackclients.client.openstack.org --all
 ----
 +
+* Replace the CSV version with the CSV version that is deployed in the cluster.
 . Delete the OSPdO `OpenStackControlPlane` custom resource (CR):
 +
 ----
@@ -192,8 +272,19 @@ $ oc delete openstackprovisionservers.osp-director.openstack.org -n openstack --
 +
 ----
 $ compute_bmh_list=$(oc get bmh -n openshift-machine-api |grep compute|awk '{printf $1 " "}')
-$ for bmh_compute in $compute_bmh_list;do oc annotate bmh -n openshift-machine-api $bmh_compute baremetalhost.metal3.io/detached="";done
-$ oc delete bmh -n openshift-machine-api $bmh_compute;done
+$ for bmh_compute in $compute_bmh_list;do oc annotate bmh -n openshift-machine-api $bmh_compute baremetalhost.metal3.io/detached="";\
+oc -n openshift-machine-api wait bmh/$bmh_compute --for=jsonpath='{.status.operationalStatus}'=detached --timeout=30s || {
+    echo "ERROR: BMH did not enter detached state"
+    exit 1
+}
+done
+----
+. Delete the `BareMetalHost` resources after their operational status is detached:
++
+----
+for bmh_compute in $compute_bmh_list;do \
+    oc -n openshift-machine-api delete bmh $bmh_compute; \
+done
 ----
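(Optional verification sketch, not part of the commit: `operationalStatus` is a standard `BareMetalHost` status field, so you can list it for all hosts before deleting them.)

----
$ oc -n openshift-machine-api get bmh \
    -o custom-columns=NAME:.metadata.name,STATE:.status.operationalStatus
----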

 . Delete the OSPdO Operator Lifecycle Manager resources to remove OSPdO:

docs_user/modules/proc_preparing-RHOSO-for-director-operator-adoption.adoc

+1 -8

@@ -306,15 +306,8 @@ $ oc get secret tripleo-passwords -n $OSPDO_NAMESPACE -o json | jq -r '.data["tr
 }
 ----

-. Install the {rhos_acro} operators:
+. Install the {rhos_acro} operators. For more information, see link:{deploying-rhoso}/index#assembly_installing-and-preparing-the-Operators[Installing and preparing the Operators] in _{deploying-rhoso-t}_.
 +
-----
-$ git clone https://github.com/openstack-k8s-operators/install_yamls.git
-cd install_yamls
-BMO_SETUP=false NETWORK_ISOLATION=false NAMESPACE=${RHOSO18_NAMESPACE} make openstack
-BMO_SETUP=false NETWORK_ISOLATION=false NAMESPACE=${RHOSO18_NAMESPACE} make openstack_init
-BMO_SETUP=false NETWORK_ISOLATION=false make metallb
-----

 . Apply the `IPAddressPool` resource that matches the new OpenStack 18 deployment to configure which IPs can be used as virtual IPs (VIPs):
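(Illustrative only: a minimal MetalLB `IPAddressPool` has the following shape; the name, namespace, and address range are placeholders that must match your environment.)

----
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  addresses:
  - 172.17.0.80-172.17.0.90
----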

docs_user/modules/proc_preparing-controller-nodes-for-director-operator-adoption.adoc

+36 -2

@@ -50,8 +50,8 @@ $ export CONTROLLER_NODE=$(oc -n $OSPDO_NAMESPACE get vmi -ojson | jq -r '.items
 .. Disable stonith:
 +
 ----
-$ stonith_resources=$(sudo pcs stonith status|grep Started|awk '{print $2}')
-for in stonith_resources ;do CONTROLLER1_SSH sudo pcs stonith disable ;done
+$ stonith_resources=$($CONTROLLER1_SSH sudo pcs stonith status|grep Started|awk '{print $2}')
+for resource in $stonith_resources; do $CONTROLLER1_SSH sudo pcs stonith disable $resource; done
 ----
 .. Put non-Controller nodes in maintenance:
 +

@@ -76,6 +76,40 @@ $CONTROLLER1_SSH sudo systemctl restart corosync
 $CONTROLLER1_SSH sudo pcs cluster stop
 $CONTROLLER1_SSH sudo pcs cluster start
 ----
++
+.. Check that only Controller-0 is started and that the other two Controllers are stopped:
++
+----
+$CONTROLLER_SSH sudo pcs status
+----
+.. Check if the {OpenStackShort} control plane is still operational:
++
+----
+$OS_CLIENT openstack compute service list
+----
++
+[NOTE]
+You might need to wait a few minutes for the control plane to become operational. Control plane responses are slower after this point.
+.. When Pacemaker manages only one of the Controllers, delete two of the Controller VMs. The following example specifies the Controller-1 and Controller-2 VMs for deletion:
++
+----
+$ oc -n openstack annotate vm controller-1 osp-director.openstack.org/delete-host=true
+$ oc -n openstack annotate vm controller-2 osp-director.openstack.org/delete-host=true
+----
+.. Reduce the `roleCount` for the Controller role in the `OpenStackControlPlane` CR to `1`:
++
+----
+$ oc -n openstack patch OpenStackControlPlane overcloud --type json -p '[{"op": "replace", "path":"/spec/virtualMachineRoles/controller/roleCount", "value": 1}]'
+----
+.. Ensure that the `OpenStackClient` pod is running on the same {OpenShiftShort} node as the remaining Controller VM. If it is not, cordon off the two nodes that were freed up for {rhos_acro}, delete the `OpenStackClient` pod so that it is rescheduled onto the {OpenShiftShort} node that hosts the remaining Controller VM, and then uncordon the nodes:
++
+----
+$ oc adm cordon $OSP18_NODE1
+$ oc adm cordon $OSP18_NODE2
+$ oc delete pod openstackclient
+$ oc adm uncordon $OSP18_NODE1
+$ oc adm uncordon $OSP18_NODE2
+----
 .. Remove the OSPdO node network configuration policies from the other two nodes that are not running the Controller VM by placing a `nodeSelector` on the {OpenShiftShort} node that contains the Controller VM:
 +
 ----

docs_user/modules/proc_retrieving-network-information-from-your-existing-deployment.adoc

+4

@@ -62,3 +62,7 @@ consumed and are not available for the new control plane services until the adop

 . Repeat this procedure for each isolated network and each host in the
 configuration.
+
+ifeval::["{build_variant}" == "ospdo"]
+For more information about director Operator network configurations, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#assembly_creating-networks-with-director-operator[Creating networks with director Operator] in _Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator_.
+endif::[]