Commit bb37598
Apply required ospdo changes to adoption docs
Jira: https://issues.redhat.com/browse/OSPRH-14644
Parent: 588171f

18 files changed: +273, -52 lines

docs_user/adoption-attributes.adoc

+1-1
@@ -149,7 +149,7 @@ ifeval::["{build}" == "downstream"]
 :telemetry: Telemetry service
 endif::[]
 
-ifeval::["{build}-{build_variant}" == "downstream-ospdo"]
+ifeval::["{build_variant}" == "ospdo"]
 :OpenStackPreviousInstaller: director Operator
 endif::[]
 
docs_user/assemblies/assembly_adopting-the-data-plane.adoc

+2
@@ -8,7 +8,9 @@ ifdef::context[:parent-context: {context}]
 
 Adopting the {rhos_long} data plane involves the following steps:
 
+ifeval::["{build_variant}" != "ospdo"]
 . Stop any remaining services on the {rhos_prev_long} ({OpenStackShort}) {rhos_prev_ver} control plane.
+endif::[]
 . Deploy the required custom resources.
 . Perform a fast-forward upgrade on Compute services from {OpenStackShort} {rhos_prev_ver} to {rhos_acro} {rhos_curr_ver}.
 . Adopt Networker services to the {rhos_acro} data plane.

docs_user/modules/con_adoption-limitations.adoc

+1-1
@@ -29,7 +29,7 @@ The following {compute_service_first_ref} features are Technology Previews:
 * AMD SEV
 * Direct download from Rados Block Device (RBD)
 * File-backed memory
-* `Provider.yaml`
+** Multiple data plane node sets
 
 .Unsupported features
 
docs_user/modules/con_adoption-prerequisites.adoc

+3
@@ -11,6 +11,9 @@ Planning information::
 * Review the adoption-specific networking requirements. For more information, see xref:configuring-network-for-RHOSO-deployment_planning[Configuring the network for the RHOSO deployment].
 * Review the adoption-specific storage requirements. For more information, see xref:storage-requirements_configuring-network[Storage requirements].
 * Review how to customize your deployed control plane with the services that are required for your environment. For more information, see link:{customizing-rhoso}/index[{customizing-rhoso-t}].
+ifeval::["{build_variant}" == "ospdo"]
+* Familiarize yourself with a disconnected environment deployment. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_configuring-an-airgapped-environment_air-gapped-environment[Configuring an airgapped environment] in _Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator_.
+endif::[]
 * Familiarize yourself with the following {OpenShiftShort} concepts that are used during adoption:
 ** link:{defaultOCPURL}/nodes/overview-of-nodes[Overview of nodes]
 ** link:{defaultOCPURL}/nodes/index#nodes-scheduler-node-selectors-about_nodes-scheduler-node-selectors[About node selectors]

docs_user/modules/con_adoption-process-overview.adoc

+3
@@ -9,6 +9,9 @@ Familiarize yourself with the steps of the adoption process and the optional pos
 . xref:migrating-tls-everywhere_configuring-network[Migrate TLS everywhere (TLS-e) to the Red Hat OpenStack Services on OpenShift (RHOSO) deployment].
 . xref:migrating-databases-to-the-control-plane_configuring-network[Migrate your existing databases to the new control plane].
 . xref:adopting-openstack-control-plane-services_configuring-network[Adopt your Red Hat OpenStack Platform 17.1 control plane services to the new RHOSO 18.0 deployment].
+ifeval::["{build_variant}" == "ospdo"]
+. xref:ospdo_scale_down_pre_database_adoption_configuring-network[Scaling down director Operator resources].
+endif::[]
 . xref:adopting-data-plane_adopt-control-plane[Adopt the RHOSO 18.0 data plane].
 . xref:migrating-the-object-storage-service_adopt-control-plane[Migrate the Object Storage service (swift) to the RHOSO nodes].
 . xref:ceph-migration_adopt-control-plane[Migrate the {Ceph} cluster].

docs_user/modules/con_comparing-configuration-files-between-deployments.adoc

+4
@@ -3,6 +3,10 @@
 = Comparing configuration files between deployments
 
 To help you manage the configuration for your {OpenStackPreviousInstaller} and {rhos_prev_long} ({OpenStackShort}) services, you can compare the configuration files between your {OpenStackPreviousInstaller} deployment and the {rhos_long} cloud by using the os-diff tool.
+ifeval::["{build_variant}" == "ospdo"]
+[NOTE]
+At this time, os-diff does not support director Operator.
+endif::[]
 
 .Prerequisites
 
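The comparison that os-diff automates can be approximated with plain `diff` when os-diff is unavailable. A minimal sketch, using hypothetical stand-in files rather than real `glance-api.conf` snapshots:

```shell
# Manual approximation of an os-diff style comparison using plain diff.
# The two files below are hypothetical stand-ins for configuration
# snapshots pulled from the old and new deployments.
mkdir -p /tmp/osdiff-demo

printf '[DEFAULT]\nenabled_backends = default_backend:rbd\ndebug = False\n' > /tmp/osdiff-demo/glance-api.conf.old
printf '[DEFAULT]\nenabled_backends = default_backend:rbd\ndebug = True\n'  > /tmp/osdiff-demo/glance-api.conf.new

# Show only the options that changed between the two snapshots
diff -u /tmp/osdiff-demo/glance-api.conf.old /tmp/osdiff-demo/glance-api.conf.new | grep '^[+-]debug'
```

Unlike os-diff, this ignores option ordering and section context, but it is often enough for a quick sanity check of a single file.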

docs_user/modules/proc_adopting-compute-services-to-the-data-plane.adoc

-6
@@ -125,9 +125,6 @@ endif::[]
 ifeval::["{build}" == "downstream"]
 $(cat <path_to_SSH_key> | base64 | sed \'s/^/ /')
 endif::[]
-ifeval::["{build_variant}" == "ospdo"]
-$(oc exec -n $<ospdo_namespace> -t openstackclient openstackclient -- cat /home/cloud-admin/.ssh/id_rsa | base64 | sed \'s/^/ /')
-endif::[]
 EOF
 ----
 +
@@ -281,14 +278,12 @@ ifeval::["{build}" == "downstream"]
 - redhat
 endif::[]
 - bootstrap
-- download-cache
 - configure-network
 - validate-network
 - install-os
 - configure-os
 - ssh-known-hosts
 - run-os
-- reboot-os
 - install-certs
 - libvirt
 - nova
@@ -474,7 +469,6 @@ ifeval::["{build}" == "downstream"]
 - redhat
 endif::[]
 - bootstrap
-- download-cache
 - configure-network
 - validate-network
 - install-os
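The secret manifest earlier in this module embeds the SSH key with a `base64 | sed` pipeline so that the encoded lines nest under a YAML field. A runnable sketch of that idiom, using a throwaway file in place of `<path_to_SSH_key>`:

```shell
# Sketch of the "base64 | sed" idiom used to embed an SSH key in a
# secret manifest. The key file is a throwaway stand-in, not a real key.
mkdir -p /tmp/keydemo
printf 'fake-private-key\n' > /tmp/keydemo/id_rsa

# Encode the file, then indent every line so it nests under a YAML field
encoded=$(base64 < /tmp/keydemo/id_rsa | sed 's/^/        /')
printf '%s\n' "$encoded"

# Stripping the indent and decoding recovers the original bytes
printf '%s\n' "$encoded" | sed 's/^ *//' | base64 -d
```

The `sed 's/^/        /'` step only adds leading whitespace; base64 decoders ignore it, so the round trip is lossless.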

docs_user/modules/proc_adopting-image-service-with-ceph-backend.adoc

+5
@@ -55,6 +55,11 @@ EOF
 If you backed up your {rhos_prev_long} ({OpenStackShort}) services configuration file from the original environment, you can compare it with the configuration file that you adopted and ensure that the configuration is correct.
 For more information, see xref:pulling-configuration-from-tripleo-deployment_adopt-control-plane[Pulling the configuration from a {OpenStackPreviousInstaller} deployment].
 
+ifeval::["{build_variant}" == "ospdo"]
+[NOTE]
+At this time, os-diff does not support director Operator.
+endif::[]
+
 ----
 os-diff diff /tmp/collect_tripleo_configs/glance/etc/glance/glance-api.conf glance_patch.yaml --crd
 ----

docs_user/modules/proc_decommissioning-rhosp-standalone-ceph-NFS-service.adoc

+6
@@ -64,6 +64,12 @@ $ oc patch openstackcontrolplane openstack --type=merge --patch-file=~/<manila.p
 You can defer this step until after {rhos_acro} {rhos_curr_ver} is operational. During this time, you cannot decommission the Controller nodes.
 +
 ----
+ifeval::["{build_variant}" == "ospdo"]
+$ CONTROLLER_SSH sudo pcs resource disable ceph-nfs
+$ CONTROLLER_SSH sudo pcs resource disable ip-<VIP>
+$ CONTROLLER_SSH sudo pcs resource unmanage ceph-nfs
+$ CONTROLLER_SSH sudo pcs resource unmanage ip-<VIP>
+endif::[]
 $ sudo pcs resource disable ceph-nfs
 $ sudo pcs resource disable ip-<VIP>
 $ sudo pcs resource unmanage ceph-nfs

docs_user/modules/proc_deploying-backend-services.adoc

-7
@@ -92,13 +92,6 @@ make input
 ----
 endif::[]
 
-ifeval::["{build_variant}" == "ospdo"]
-+
-----
-$ oc get secret tripleo-passwords -o jsonpath='{.data.*}' | base64 -d>~/tripleo-standalone-passwords.yaml
-----
-endif::[]
-
 ifeval::["{build}" == "downstream"]
 . Create the {OpenStackShort} secret. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/{rhos_curr_ver}/html-single/deploying_red_hat_openstack_services_on_openshift/index#proc_providing-secure-access-to-the-RHOSO-services_preparing[Providing secure access to the Red Hat OpenStack Services on OpenShift services] in _Deploying Red Hat OpenStack Services on OpenShift_.
 endif::[]

docs_user/modules/proc_migrating-databases-to-mariadb-instances.adoc

+14
@@ -119,6 +119,9 @@ The source cloud always uses the same password for cells databases. For that rea
 +
 [source,yaml]
 ----
+ifeval::["{build_variant}" == "ospdo"]
+oc project <OSPDO_NAMESPACE>
+endif::[]
 oc apply -f - <<EOF
 ---
 apiVersion: v1
@@ -137,12 +140,23 @@ apiVersion: v1
 kind: Pod
 metadata:
   name: mariadb-copy-data
+ifeval::["{build_variant}" == "ospdo"]
+  namespace: <OSPDO_NAMESPACE>
+endif::[]
   annotations:
     openshift.io/scc: anyuid
+ifeval::["{build_variant}" != "ospdo"]
     k8s.v1.cni.cncf.io/networks: internalapi
+endif::[]
+ifeval::["{build_variant}" == "ospdo"]
+    k8s.v1.cni.cncf.io/networks: '[{"name": internalapi-static, "namespace": <OSPDO_NAMESPACE>, "ips": ["10.2.120.9/24"]}]'
+endif::[]
   labels:
     app: adoption
 spec:
+ifeval::["{build_variant}" == "ospdo"]
+  nodeName: <$CONTROLLER_NODE>
+endif::[]
   containers:
   - image: $MARIADB_IMAGE
     command: [ "sh", "-c", "sleep infinity"]

docs_user/modules/proc_migrating-ovn-data.adoc

+32-1
@@ -26,11 +26,24 @@ SOURCE_OVSDB_IP=172.17.0.100 # For IPv4
 SOURCE_OVSDB_IP=[fd00:bbbb::100] # For IPv6
 ----
 +
+ifeval::["{build_variant}" == "ospdo"]
+Update the `SOURCE_OVSDB_IP` value to the internalApi IP address of the remaining {OpenStackShort} 17 Controller VM. You can retrieve the IP address with the following command:
+----
+$ $CONTROLLER_SSH ip a s enp4s0
+----
+Select the IP address that does not have a /32 prefix.
+[NOTE]
+In a disconnected environment director Operator deployment, use
+OVSDB_IMAGE=registry.redhat.io/rhoso/openstack-ovn-base-rhel9@sha256:967046c6bdb8f55c236085b5c5f9667f0dbb9f3ac52a6560dd36a6bfac051e1f
+For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_configuring-an-airgapped-environment_air-gapped-environment[Configuring an airgapped environment] in _Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator_.
+endif::[]
+ifeval::["{build_variant}" != "ospdo"]
 To get the value to set `SOURCE_OVSDB_IP`, query the puppet-generated configurations in a Controller node:
 +
 ----
 $ grep -rI 'ovn_[ns]b_conn' /var/lib/config-data/puppet-generated/
 ----
+endif::[]
 
 .Procedure
 ifeval::["{build_variant}" == "ospdo"]
@@ -46,13 +59,15 @@ endif::[]
 +
 [source,yaml]
 ----
+ifeval::["{build_variant}" == "ospdo"]
+$ oc project <OSPDO_NAMESPACE>
+endif::[]
 $ oc apply -f - <<EOF
 ---
 apiVersion: cert-manager.io/v1
 kind: Certificate
 metadata:
   name: ovn-data-cert
-  namespace: openstack
 spec:
   commonName: ovn-data-cert
   secretName: ovn-data-cert
@@ -190,6 +205,8 @@ spec:
     ovnController:
       networkAttachment: tenant
       nodeSelector:
+        node: non-existing-node-name
+
 '
 ----

@@ -299,28 +316,42 @@ ServicesToStop=("tripleo_ovn_cluster_north_db_server.service"
 
 echo "Stopping systemd OpenStack services"
 for service in ${ServicesToStop[*]}; do
+ifeval::["{build_variant}" != "ospdo"]
     for i in {1..3}; do
         SSH_CMD=CONTROLLER${i}_SSH
+endif::[]
+ifeval::["{build_variant}" == "ospdo"]
+        SSH_CMD=CONTROLLER_SSH
+endif::[]
         if [ ! -z "${!SSH_CMD}" ]; then
             echo "Stopping the $service in controller $i"
             if ${!SSH_CMD} sudo systemctl is-active $service; then
                 ${!SSH_CMD} sudo systemctl stop $service
             fi
         fi
+ifeval::["{build_variant}" != "ospdo"]
     done
+endif::[]
 done
 
 echo "Checking systemd OpenStack services"
 for service in ${ServicesToStop[*]}; do
+ifeval::["{build_variant}" != "ospdo"]
     for i in {1..3}; do
         SSH_CMD=CONTROLLER${i}_SSH
+endif::[]
+ifeval::["{build_variant}" == "ospdo"]
+        SSH_CMD=CONTROLLER_SSH
+endif::[]
         if [ ! -z "${!SSH_CMD}" ]; then
             if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=inactive >/dev/null; then
                 echo "ERROR: Service $service still running on controller $i"
             else
                 echo "OK: Service $service is not running on controller $i"
             fi
         fi
+ifeval::["{build_variant}" != "ospdo"]
     done
+endif::[]
 done
 ----
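The stop and check loops above rely on bash indirect expansion: `SSH_CMD` holds the *name* of a variable, and `${!SSH_CMD}` expands to that variable's *value*. A self-contained sketch of the pattern (bash-specific; `CONTROLLER_SSH` is a stub `echo` rather than a real ssh wrapper):

```shell
# Sketch of the ${!SSH_CMD} indirection used in the service-stop loops.
# CONTROLLER_SSH is a stub command here, standing in for a real ssh wrapper.
CONTROLLER_SSH="echo [controller-0]"

SSH_CMD=CONTROLLER_SSH
if [ ! -z "${!SSH_CMD}" ]; then
    # ${!SSH_CMD} expands to the value of the variable whose name is
    # stored in SSH_CMD, so the stub prefixes the command line
    ${!SSH_CMD} sudo systemctl stop tripleo_ovn_cluster_north_db_server.service
fi
```

This is why the non-OSPdO variant can iterate `CONTROLLER1_SSH`, `CONTROLLER2_SSH`, `CONTROLLER3_SSH` with a single loop body: only the name stored in `SSH_CMD` changes.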

docs_user/modules/proc_ospdo-scale-down-pre-database-adoption.adoc

+101-3
@@ -4,12 +4,98 @@
 
 Before you migrate your databases to the control plane, you must scale down and remove OpenStack director Operator (OSPdO) resources in order to use the {rhos_long} resources.
 
+You must perform the following actions:
+
+* Dump selected data from the existing {OpenStackShort} {rhos_prev_ver} cluster. You use this data to build the custom resources for the data plane adoption.
+* After you extract and save the data, remove the OSPdO control plane and operator.
+
 .Procedure
+. Download the NIC templates:
++
+----
+# Make temp directory if doesn't exist
+mkdir -p temp
+cd temp
+echo "Extract nic templates"
+oc get -n "${OSPDO_NAMESPACE}" cm tripleo-tarball -ojson | jq -r '.binaryData."tripleo-tarball.tar.gz"' | base64 -d | tar xzvf -
+# Revert back to original directory
+cd -
+----
++
+. Get the SSH key for accessing the data plane nodes:
++
+----
+# Make temp directory if doesn't exist
+mkdir -p temp
+# Get the SSH key from the openstackclient (osp 17)
+# to be used later to create the SSH secret for dataplane adoption
+$OS_CLIENT cat /home/cloud-admin/.ssh/id_rsa > temp/ssh.private
+$OS_CLIENT cat /home/cloud-admin/.ssh/id_rsa.pub > temp/ssh.pub
+echo "SSH private and public keys saved in temp/ssh.private and temp/ssh.pub"
+----
++
+. Get the OVN configuration from each Compute node role (`OpenStackBaremetalSet`). You use this configuration later to build the `OpenStackDataPlaneNodeSet` resources. Repeat the following for each `OpenStackBaremetalSet`:
++
+----
+# Make temp directory if doesn't exist
+mkdir -p temp
+#
+# Query the first node in OSBMS
+#
+IP=$(oc -n "${OSPDO_NAMESPACE}" get openstackbaremetalsets.osp-director.openstack.org <<OSBMS-NAME>> -ojson | jq -r '.status.baremetalHosts| keys[] as $k | .[$k].ipaddresses["ctlplane"]'| awk -F'/' '{print $1}')
+# Get the OVN parameters
+oc -n "${OSPDO_NAMESPACE}" exec -c openstackclient openstackclient -- \
+    ssh cloud-admin@${IP} sudo ovs-vsctl -f json --columns=external_ids list Open |
+    jq -r '.data[0][0][1][]|join("=")' | sed -n -E 's/^(ovn.*)+=(.*)+/edpm_\1: \2/p' |
+    grep -v -e ovn-remote -e encap-tos -e openflow -e ofctrl > temp/<<OSBMS-NAME>>.txt
+----
++
+----
+# Create temp directory if it does not exist
+mkdir -p temp
+for name in $(oc -n "${OSPDO_NAMESPACE}" get openstackbaremetalsets.osp-director.openstack.org | awk 'NR > 1 {print $1}'); do
+    oc -n "${OSPDO_NAMESPACE}" get openstackbaremetalsets.osp-director.openstack.org $name -ojson |
+    jq -r '.status.baremetalHosts| "nodes:", keys[] as $k | .[$k].ipaddresses as $a |
+        "  \($k):",
+        "    hostName: \($k)",
+        "    ansible:",
+        "      ansibleHost: \($a["ctlplane"] | sub("/\\d+"; ""))",
+        "    networks:", ($a | to_entries[] | "    - name: \(.key) \n      fixedIP: \(.value | sub("/\\d+"; ""))\n      subnetName: subnet1")' > temp/${name}-nodes.txt
+done
+----
++
+. Remove the conflicting repositories and packages from all Compute hosts. Define the OSPdO and {OpenStackShort} {rhos_prev_ver} Pacemaker services that must be stopped:
++
+----
+PacemakerResourcesToStop_dataplane=(
+    "galera-bundle"
+    "haproxy-bundle"
+    "rabbitmq-bundle")
 
+# Stop these PCM services after adopting control
+# plane, but before starting deletion of OSPdO (osp17) env
+echo "Stopping pacemaker OpenStack services"
+SSH_CMD=CONTROLLER_SSH
+if [ -n "${!SSH_CMD}" ]; then
+    echo "Using controller 0 to run pacemaker commands "
+    for resource in "${PacemakerResourcesToStop_dataplane[@]}"; do
+        if ${!SSH_CMD} sudo pcs resource config "$resource" &>/dev/null; then
+            echo "Stopping $resource"
+            ${!SSH_CMD} sudo pcs resource disable "$resource"
+        else
+            echo "Service $resource not present"
+        fi
+    done
+fi
+----
++
 . Scale down the {rhos_acro} OpenStack Operator `controller-manager` to 0 replicas and temporarily delete the `OpenStackControlPlane` `OpenStackClient` pod, so that you can use the OSPdO `controller-manager` to clean up some of its resources. The cleanup is needed to avoid a pod name collision between the OSPdO OpenStackClient and the {rhos_acro} OpenStackClient.
+[NOTE]
+Replace the CSV version in the following command with the CSV version that is deployed in the cluster:
 +
 ----
-$ oc patch csv -n openstack-operators openstack-operator.v0.0.1 --type json -p="[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "0"}]"
+$ oc patch csv -n openstack-operators openstack-operator.v1.0.5 --type json -p="[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "0"}]"
 $ oc delete openstackclients.client.openstack.org --all
 ----
 +
@@ -192,8 +278,20 @@ $ oc delete openstackprovisionservers.osp-director.openstack.org -n openstack --
 +
 ----
 $ compute_bmh_list=$(oc get bmh -n openshift-machine-api |grep compute|awk '{printf $1 " "}')
-$ for bmh_compute in $compute_bmh_list;do oc annotate bmh -n openshift-machine-api $bmh_compute baremetalhost.metal3.io/detached="";done
-$ oc delete bmh -n openshift-machine-api $bmh_compute;done
+$ for bmh_compute in $compute_bmh_list;do oc annotate bmh -n openshift-machine-api $bmh_compute baremetalhost.metal3.io/detached="";\
+oc -n openshift-machine-api wait bmh/$bmh_compute --for=jsonpath='{.status.operationalStatus}'=detached --timeout=30s || {
+    echo "ERROR: BMH did not enter detached state"
+    exit 1
+}
+done
+----
++
+. Delete the `BareMetalHost` resource after its operational status is detached:
++
+----
+for bmh_compute in $compute_bmh_list;do \
+    oc -n openshift-machine-api delete bmh $bmh_compute; \
+done
 ----
 
 . Delete the OSPdO Operator Lifecycle Manager resources to remove OSPdO:
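Separately from the OLM cleanup, the NIC-template step at the start of this module pipes ConfigMap data through `base64 -d | tar xzvf -`. A cluster-free sketch of that pipeline, with a locally built tarball standing in for the ConfigMap's `.binaryData` field:

```shell
# Cluster-free sketch of the "base64 -d | tar xzvf -" pipeline used to
# extract NIC templates from the tripleo-tarball ConfigMap. A locally
# built tarball stands in for the ConfigMap's binaryData field.
mkdir -p /tmp/tarballdemo/src /tmp/tarballdemo/out
echo 'sample NIC template' > /tmp/tarballdemo/src/net-config.yaml

# Build and base64-encode a tarball, as it would be stored in the ConfigMap
tar -czf - -C /tmp/tarballdemo/src net-config.yaml | base64 > /tmp/tarballdemo/encoded.txt

# Decode and extract, mirroring the documented command
base64 -d < /tmp/tarballdemo/encoded.txt | tar -xzf - -C /tmp/tarballdemo/out
cat /tmp/tarballdemo/out/net-config.yaml
```

This works because `binaryData` values in a ConfigMap are base64-encoded bytes, so decoding yields the original gzip tarball unchanged.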

docs_user/modules/proc_preparing-RHOSO-for-director-operator-adoption.adoc

+1-6
@@ -309,12 +309,7 @@ $ oc get secret tripleo-passwords -n $OSPDO_NAMESPACE -o json | jq -r '.data["tr
 . Install the {rhos_acro} operators:
 +
 ----
-$ git clone https://github.com/openstack-k8s-operators/install_yamls.git
-cd install_yamls
-BMO_SETUP=false NETWORK_ISOLATION=false NAMESPACE=${RHOSO18_NAMESPACE} make openstack
-BMO_SETUP=false NETWORK_ISOLATION=false NAMESPACE=${RHOSO18_NAMESPACE} make openstack_init
-BMO_SETUP=false NETWORK_ISOLATION=false make metallb
-----
+Follow the Red Hat OpenStack Services on OpenShift procedure. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html-single/deploying_red_hat_openstack_services_on_openshift/index#assembly_installing-and-preparing-the-Operators[Installing and preparing the Operators].
 
 
 . Apply the `IPAddressPool` resource that matches the new OpenStack 18 deployment to configure which IPs can be used as virtual IPs (VIPs):
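A hypothetical `IPAddressPool` manifest for this step might look like the following sketch. The pool name, namespace, and address range are placeholders that must be aligned with your ctlplane network layout; they are not taken from the source environment:

```shell
# Write a hypothetical MetalLB IPAddressPool manifest; name, namespace,
# and address range are placeholders, not values from a real deployment.
cat <<'EOF' > /tmp/ipaddresspool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ctlplane
  namespace: metallb-system
spec:
  addresses:
  - 192.168.122.80-192.168.122.90
EOF

# Review the manifest; apply it with: oc apply -f /tmp/ipaddresspool.yaml
cat /tmp/ipaddresspool.yaml
```

The address range must cover the VIPs that the adopted services expect, which is why it has to match the new OpenStack 18 network layout rather than being copied blindly.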
