Before you migrate your databases to the control plane, you must scale down and remove OpenStack director Operator (OSPdO) resources in order to use the {rhos_long} resources.

You must perform the following actions:

* Dump selected data from the existing {OpenStackShort} {rhos_prev_ver} cluster. You use this data to build the custom resources for the data plane adoption.
* After you extract and save the data, remove the OSPdO control plane and operator.

.Procedure
. Download the NIC templates:
+
----
# Create the temp directory if it does not exist
mkdir -p temp
cd temp
echo "Extract nic templates"
oc get -n "${OSPDO_NAMESPACE}" cm tripleo-tarball -ojson | jq -r '.binaryData."tripleo-tarball.tar.gz"' | base64 -d | tar xzvf -
# Return to the original directory
cd -
----
+
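The `base64 -d | tar xzvf -` pipeline above can be exercised locally without a cluster. This sketch round-trips a stand-in template file the same way the ConfigMap `binaryData` value is decoded (`demo-src/net-config.j2` is a hypothetical name):
+
----
# Build a stand-in tarball and base64-encode it, as it would appear
# in the ConfigMap binaryData field
mkdir -p temp demo-src
printf 'sample NIC template' > demo-src/net-config.j2
tar -C demo-src -czf - net-config.j2 | base64 -w0 > encoded.b64
# Decode and extract, mirroring the step above
base64 -d < encoded.b64 | tar xzvf - -C temp
cat temp/net-config.j2
----
+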
. Get the SSH key for accessing the data plane nodes:
+
----
# Create the temp directory if it does not exist
mkdir -p temp
# Get the SSH key pair from the openstackclient pod (osp 17);
# it is used later to create the SSH secret for the data plane adoption
$OS_CLIENT cat /home/cloud-admin/.ssh/id_rsa > temp/ssh.private
$OS_CLIENT cat /home/cloud-admin/.ssh/id_rsa.pub > temp/ssh.pub
echo "SSH private and public keys saved in temp/ssh.private and temp/ssh.pub"
----
+
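The saved private key is later turned into the SSH secret that the data plane adoption consumes. As a sketch, the secret manifest can be assembled from the saved file; the secret name `dataplane-adoption-secret` and the `ssh-privatekey` data key are assumptions based on the data plane adoption convention, so verify them against your adoption resources:
+
----
mkdir -p temp
# Stand-in key material for illustration only; a real run uses the
# files extracted from the openstackclient pod above
printf 'FAKE-PRIVATE-KEY' > temp/ssh.private
# Assemble the Secret manifest (apply later with: oc apply -f ...)
cat > temp/dataplane-adoption-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: dataplane-adoption-secret
  namespace: openstack
data:
  ssh-privatekey: $(base64 -w0 < temp/ssh.private)
EOF
----
+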
. Get the OVN configuration from each Compute node role (`OpenStackBaremetalSet`). You use this configuration later to build the `OpenStackDataPlaneNodeSet` custom resources. Repeat the following for each `OpenStackBaremetalSet`:
+
----
# Create the temp directory if it does not exist
mkdir -p temp
# Query the first node in the OpenStackBaremetalSet
IP=$(oc -n "${OSPDO_NAMESPACE}" get openstackbaremetalsets.osp-director.openstack.org <<OSBMS-NAME>> -ojson | jq -r '.status.baremetalHosts| keys[] as $k | .[$k].ipaddresses["ctlplane"]'| awk -F'/' '{print $1}')
# Get the OVN parameters
oc -n "${OSPDO_NAMESPACE}" exec -c openstackclient openstackclient -- \
    ssh cloud-admin@${IP} sudo ovs-vsctl -f json --columns=external_ids list Open |
    jq -r '.data[0][0][1][]|join("=")' | sed -n -E 's/^(ovn.*)+=(.*)+/edpm_\1: \2/p' |
    grep -v -e ovn-remote -e encap-tos -e openflow -e ofctrl > temp/<<OSBMS-NAME>>.txt
----
+
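The `sed` expression above rewrites each `ovn-*` external ID into an `edpm_`-prefixed Ansible variable. A quick local check with hypothetical sample values shows the transformation:
+
----
# Sample lines in the shape produced by the jq join("=") step above
out=$(printf 'ovn-encap-type=geneve\novn-bridge=br-int\nhostname=compute-0\n' |
    sed -n -E 's/^(ovn.*)+=(.*)+/edpm_\1: \2/p')
# Only the ovn-* keys survive, renamed for the data plane variables
echo "$out"
----
+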
----
# Create the temp directory if it does not exist
mkdir -p temp
for name in $(oc -n "${OSPDO_NAMESPACE}" get openstackbaremetalsets.osp-director.openstack.org | awk 'NR > 1 {print $1}'); do
    oc -n "${OSPDO_NAMESPACE}" get openstackbaremetalsets.osp-director.openstack.org $name -ojson |
        jq -r '.status.baremetalHosts| "nodes:", keys[] as $k | .[$k].ipaddresses as $a |
        "  \($k):",
        "    hostName: \($k)",
        "    ansible:",
        "      ansibleHost: \($a["ctlplane"] | sub("/\\d+"; ""))",
        "    networks:", ($a | to_entries[] | "    - name: \(.key)\n      fixedIP: \(.value | sub("/\\d+"; ""))\n      subnetName: subnet1")' > temp/${name}-nodes.txt
done
----
+
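To preview the node entries that the loop above writes to each `temp/<name>-nodes.txt` file, the same `jq` program can be run against a small sample `status` document (all values here are hypothetical):
+
----
sample='{"status":{"baremetalHosts":{"compute-0":{"ipaddresses":{"ctlplane":"192.168.25.100/24","internal_api":"172.17.0.100/24"}}}}}'
out=$(echo "$sample" | jq -r '.status.baremetalHosts| "nodes:", keys[] as $k | .[$k].ipaddresses as $a |
    "  \($k):",
    "    hostName: \($k)",
    "    ansible:",
    "      ansibleHost: \($a["ctlplane"] | sub("/\\d+"; ""))",
    "    networks:", ($a | to_entries[] | "    - name: \(.key)\n      fixedIP: \(.value | sub("/\\d+"; ""))\n      subnetName: subnet1")')
echo "$out"
----
+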
. Remove the conflicting repositories and packages from all Compute hosts. Define the OSPdO and {OpenStackShort} {rhos_prev_ver} Pacemaker services that must be stopped:
+
----
PacemakerResourcesToStop_dataplane=(
    "galera-bundle"
    "haproxy-bundle"
    "rabbitmq-bundle")

# Stop these Pacemaker services after adopting the control
# plane, but before starting the deletion of the OSPdO (osp 17) environment
echo "Stopping pacemaker OpenStack services"
SSH_CMD=CONTROLLER_SSH
if [ -n "${!SSH_CMD}" ]; then
    echo "Using controller 0 to run pacemaker commands"
    for resource in "${PacemakerResourcesToStop_dataplane[@]}"; do
        if ${!SSH_CMD} sudo pcs resource config "$resource" &>/dev/null; then
            echo "Stopping $resource"
            ${!SSH_CMD} sudo pcs resource disable "$resource"
        else
            echo "Service $resource not present"
        fi
    done
fi
----
+
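The `${!SSH_CMD}` construct above is Bash indirect expansion: `SSH_CMD` holds the name of the variable that contains the actual SSH command line, which lets the same loop run against whichever controller variable is populated. A minimal illustration with a hypothetical value:
+
----
# CONTROLLER_SSH would normally be set earlier in the adoption shell
CONTROLLER_SSH="ssh -F ssh.config controller-0"
SSH_CMD=CONTROLLER_SSH
# ${!SSH_CMD} expands to the value of the variable named by SSH_CMD
resolved="${!SSH_CMD}"
echo "$resolved"
----
+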
. Scale down the {rhos_acro} OpenStack Operator `controller-manager` to 0 replicas and temporarily delete the `OpenStackControlPlane` `OpenStackClient` pod, so that you can use the OSPdO `controller-manager` to clean up some of its resources. The cleanup is needed to avoid a pod name collision between the OSPdO OpenStackClient and the {rhos_acro} OpenStackClient.
+
[NOTE]
Replace the CSV version in the following command with the CSV version that is deployed in the cluster:
+
----
$ oc patch csv -n openstack-operators openstack-operator.v1.0.5 --type json -p='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": 0}]'
$ oc delete openstackclients.client.openstack.org --all
----
+
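Because the JSON patch payload itself contains double quotes, it must be single-quoted on the shell command line, and `replicas` must be a number rather than a string. A quick local sanity check of the payload, assuming `jq` is available:
+
----
patch='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": 0}]'
# jq -e exits non-zero if the payload is not the expected shape
echo "$patch" | jq -e '.[0].op == "replace" and (.[0].value | type == "number")'
----
+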
----
$ compute_bmh_list=$(oc get bmh -n openshift-machine-api | grep compute | awk '{printf $1 " "}')
$ for bmh_compute in $compute_bmh_list; do oc annotate bmh -n openshift-machine-api $bmh_compute baremetalhost.metal3.io/detached=""; \
    oc -n openshift-machine-api wait bmh/$bmh_compute --for=jsonpath='{.status.operationalStatus}'=detached --timeout=30s || {
        echo "ERROR: BMH did not enter detached state"
        exit 1
    }
done
----
+
. Delete the `BareMetalHost` resource after its operational status is detached:
+
----
for bmh_compute in $compute_bmh_list; do \
    oc -n openshift-machine-api delete bmh $bmh_compute; \
done
----

. Delete the OSPdO Operator Lifecycle Manager resources to remove OSPdO: