Commit 29de512

Revert "removed DMS as per discussion with Kshitija and Shanshan over starlight slack channel"

This reverts commit aefb75e.

1 parent 1f2fbbf, commit 29de512

29 files changed: +1545 −0 lines changed
---
title: "Applying constraints"
---

At the beginning of your data migration journey with EDB Data Migration Service (EDB DMS), you [prepared and imported the schema](prepare_schema) of your source database. Now, connect to the target database and re-apply the constraints that were excluded from the schema and data migration.

## Primary key and unique constraints

For primary key and unique constraints, you have already created the tables and constraints in the target Postgres database. This allowed EDB DMS to map them to the source objects and migrate data successfully. You don't need to do anything else.

The same applies to not-null constraints if you included them in your schema import.

## Foreign key, check, and exclusion constraints

You can now re-apply any foreign key, check, or exclusion constraints you excluded during the [schema preparation and import](prepare_schema).

## Ensuring data integrity

Rows in tables that don't have primary key or unique constraints were migrated with at-least-once delivery. Therefore, these rows might be duplicated.

Deduplication can be performed as part of the [verification](verify_migration).
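For example, a foreign key and a check constraint that were excluded earlier can be re-applied with DDL like the following. This is a hypothetical sketch: the table, column, and constraint names are illustrative and not part of the migration itself.

```shell
# Hypothetical example: collect the DDL for constraints that were excluded
# during schema preparation. Table and constraint names are illustrative.
cat > reapply_constraints.sql <<'SQL'
ALTER TABLE orders
  ADD CONSTRAINT orders_customer_id_fkey
  FOREIGN KEY (customer_id) REFERENCES customers (id);

ALTER TABLE orders
  ADD CONSTRAINT orders_total_check
  CHECK (total >= 0);
SQL

# Apply the DDL against the target database, for example:
#   psql "host=<target-host> dbname=<target-db>" -f reapply_constraints.sql
echo "constraint DDL written to reapply_constraints.sql"
```

Applying foreign keys only after the data load finishes avoids ordering failures while rows are still streaming in.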
Lines changed: 195 additions & 0 deletions
---
title: "Configuring and running the EDB DMS Reader"
deepToC: true
redirects:
- /purl/dms/configure_source
---

## Getting credentials

1. Access the [EDB Postgres AI® Console](https://portal.biganimal.com) and log in with your EDB Postgres AI Database Cloud Service credentials.

1. Select the project where you created the database cluster.

1. Within your project, select **Migrate** > **Credentials**.

1. Select **Create Migration Credential** > **Download Credential**.

1. Unzip the credentials folder and copy it to the host where the reader is installed.

## Configuring the reader

1. Open the EDB DMS reader located in `/opt/cdcreader/run-cdcreader.sh` and ensure you have write permissions.

1. Set the variables according to your environment and uncomment the edited lines. See [parameters](#parameters) for further guidance. The script is reproduced below.
```shell
#!/bin/bash -e
# run-cdcreader.sh
#
# This script provides a convenient place to specify
# environment variables used to configure the
# EDB Data Migration Service Reader.
#
# After the env exports, `java` is called to start the
# software.

##########################################
#    DMS Reader General Configuration    #
##########################################

# This ID is used to identify the DMS Reader
# and is specified by the user.
#export DBZ_ID=

# Supported options include: appliance (the hybrid PG AI platform), aws
#export CLOUD_PROVIDER=

# This is the DMS backend service used by the Reader.
# If your CLOUD_PROVIDER is `appliance`, consult your system administrators.
# The default value supports the `aws` CLOUD_PROVIDER.
#export RW_SERVICE_HOST=https://transporter-rw-service.biganimal.com

# You need to create migration credentials in the EDB Postgres AI platform
# and set these fields with the paths of the credential files.
#export TLS_PRIVATE_KEY_PATH=$HOME/credentials/client-key.pem
#export TLS_CERTIFICATE_PATH=$HOME/credentials/client-cert.pem
#export TLS_CA_PATH=$HOME/credentials/int.crt
#export APICURIOREQUEST_CLIENT_KEYSTORE_LOCATION=$HOME/credentials/client.keystore.p12
#export APICURIOREQUEST_TRUSTSTORE_LOCATION=$HOME/credentials/int.truststore.p12
#export KAFKASECURITY_CLIENT_KEYSTORE_LOCATION=$HOME/credentials/client.keystore.p12
#export KAFKASECURITY_TRUSTSTORE_LOCATION=$HOME/credentials/int.truststore.p12

##########################################
#   DMS Reader Source DB Configuration   #
##########################################

# A sample configuration to create a single Postgres database connection:
#export DBZ_DATABASES_0__TYPE=POSTGRES
#export DBZ_DATABASES_0__HOSTNAME=localhost
#export DBZ_DATABASES_0__PORT=5432
# The CATALOG is the database name
#export DBZ_DATABASES_0__CATALOG=source
#export DBZ_DATABASES_0__USERNAME=postgres
# The password env can be set without specifying it here,
# but the env structure looks like this
#export DBZ_DATABASES_0__PASSWORD=password

# You can increase the index to configure more than
# one database for the DMS Reader
#export DBZ_DATABASES_1__TYPE=ORACLE
#export DBZ_DATABASES_1__HOSTNAME=localhost
#export DBZ_DATABASES_1__PORT=1521
# The CATALOG is the database name
#export DBZ_DATABASES_1__CATALOG=ORCLCDB/ORCLPDB1
#export DBZ_DATABASES_1__USERNAME=oracle
# The password env can be set without specifying it here,
# but the env structure looks like this
#export DBZ_DATABASES_1__PASSWORD=password

##########################################
#        Optional Parameters Below       #
##########################################

# Configure logging
# Generic loglevel
#export QUARKUS_LOG_LEVEL=DEBUG
# Loglevel for a single package
#export QUARKUS_LOG_CATEGORY__COM_ENTERPRISEDB__LEVEL=DEBUG

cd $(dirname $0)
java ${JAVA_OPTS} -jar quarkus-run.jar
```

## Parameters

### `DBZ_ID`

This is the name you assign to identify a source. This name later appears as a _source_ in the **Migrate** > **Sources** section of the EDB Postgres AI Console.

Consider the following ID guidelines:

- The maximum length of the ID is 255 characters.
- You can use lowercase and uppercase characters, numbers, underscores (_), and hyphens (-) for the ID. Other special characters aren't supported.
- The ID must be unique. Source instances cannot share the same ID.
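These guidelines can be checked mechanically before you edit the script. The following is a hypothetical sketch of such a check; the function name and sample IDs are illustrative, not part of the product.

```shell
# Sketch: check a candidate DBZ_ID against the guidelines above
# (at most 255 characters; only letters, digits, underscores, hyphens).
valid_dbz_id() {
  id=$1
  # Reject empty IDs or IDs containing any disallowed character.
  case "$id" in
    ""|*[!A-Za-z0-9_-]*) return 1 ;;
  esac
  # Enforce the 255-character limit.
  [ "${#id}" -le 255 ]
}

valid_dbz_id "my-source_01" && echo "valid"
valid_dbz_id "bad id!" || echo "invalid"
```

Uniqueness across readers can't be checked locally; the Console reports a conflict if two sources share an ID.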

### `RW_SERVICE_HOST`

Specifies the URL of the service that hosts the migration. `transporter-rw-service` is always https://transporter-rw-service.biganimal.com.

### `TLS_PRIVATE_KEY_PATH`

Path to the `client-key.pem` private key you downloaded from the EDB Postgres AI Console.

The HTTP client of the EDB DMS Reader uses it to perform mTLS authentication with the `transporter-rw-service`.

### `TLS_CERTIFICATE_PATH`

Path to the X509 `client-cert.pem` certificate you downloaded from the EDB Postgres AI Console.

The HTTP client of the EDB DMS Reader uses it to perform mTLS authentication with the `transporter-rw-service`.

### `TLS_CA_PATH`

Path to the `int.crt` certificate authority file you downloaded from the EDB Postgres AI Console.

It signs the certificate configured in `TLS_CERTIFICATE_PATH`.

### `APICURIOREQUEST_CLIENT_KEYSTORE_LOCATION`

Path to the `client.keystore.p12` keystore file you downloaded from the EDB Postgres AI Console. It is created from the private key and certificate configured in `TLS_PRIVATE_KEY_PATH` and `TLS_CERTIFICATE_PATH`.

The Apicurio client uses it to perform mTLS authentication with the `transporter-rw-service`.

### `APICURIOREQUEST_TRUSTSTORE_LOCATION`

Path to the `int.truststore.p12` truststore file, created from the certificate authority configured in `TLS_CA_PATH`.

The Apicurio client uses it to perform mTLS authentication with the `transporter-rw-service`.

### `DBZ_DATABASES`

This is a list of source database entries that the EDB DMS Reader needs so it can connect to the correct source databases for the migration.

You can configure the EDB DMS Reader to migrate multiple databases. The `DBZ_DATABASES_0__TYPE` section delimits the information for the first database. You can use `DBZ_DATABASES_1__TYPE` to provide data for a second database. Add more sections to the EDB DMS Reader (`DBZ_DATABASES_2__TYPE`, `DBZ_DATABASES_3__TYPE`) by increasing the index manually.
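Put together, a minimal uncommented configuration for a single Postgres source might look like the following. Every value here is a placeholder for your environment, not a recommended setting.

```shell
# Hypothetical single-source configuration; every value is a placeholder.
export DBZ_ID=acme_pg_source
export CLOUD_PROVIDER=aws
export RW_SERVICE_HOST=https://transporter-rw-service.biganimal.com

export DBZ_DATABASES_0__TYPE=POSTGRES
export DBZ_DATABASES_0__HOSTNAME=localhost
export DBZ_DATABASES_0__PORT=5432
export DBZ_DATABASES_0__CATALOG=source
export DBZ_DATABASES_0__USERNAME=postgres
export DBZ_DATABASES_0__PASSWORD=password

echo "Reader $DBZ_ID will connect to $DBZ_DATABASES_0__HOSTNAME:$DBZ_DATABASES_0__PORT/$DBZ_DATABASES_0__CATALOG"
```

A second database would repeat the same block with the `_1__` index, as shown in the commented script.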

#### `DBZ_DATABASES_0__TYPE`

This is the source database type. The EDB DMS Reader supports `ORACLE` and `POSTGRES`.

#### `DBZ_DATABASES_0__HOSTNAME`

The hostname of the source database.

#### `DBZ_DATABASES_0__PORT`

The port of the source database.

#### `DBZ_DATABASES_0__CATALOG`

The database name in the source database server.

#### `DBZ_DATABASES_0__USERNAME`

The database username for the source database.

#### `DBZ_DATABASES_0__PASSWORD`

The password for the database username of the source database.

## Running the EDB DMS Reader

1. Start the migration:

   ```shell
   cd /opt/cdcreader
   ./run-cdcreader.sh
   ```

1. Go to the [EDB Postgres AI Console](https://portal.biganimal.com), and verify that a source with the `DBZ_ID` name is displayed in **Migrate** > **Sources**.

You can select this source for your [migration](create_migration).
Lines changed: 23 additions & 0 deletions
---
title: "Creating a database cluster"
---

You can use an existing EDB Postgres® AI cluster or create a new cluster for the target of the database migration.

To use an existing cluster as a target for the migration, ensure the tables you migrate and the load generated on the target don't interfere with existing workloads.

1. Access the [EDB Postgres AI Console](https://portal.biganimal.com) and log in with your EDB Postgres AI Cloud Service credentials.

2. Select the project where you want to create the database cluster.

   See [Creating a project](/edb-postgres-ai/console/using/projects/managing_projects/#creating-a-new-project) if you want to create one.

3. Within your project, select **Create New** and **Database Cluster** to create an instance that serves as the target for the EDB Data Migration Service (EDB DMS).

   See [Creating a cluster](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/) for detailed instructions on how to create a single-node or a primary/standby high-availability cluster.

   See [Creating a distributed high-availability cluster](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_dha_cluster/) for detailed instructions on how to create a distributed high-availability cluster.

4. On the **Clusters** page, select your cluster, and use the **Quick Connect** option to access your instance from your terminal.

5. Create a new empty database to use as the target for the migration. Alternatively, you can use the default database `edb_admin`.
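For example, from the Quick Connect session you could create the target database with DDL like this. The database name is illustrative; any valid Postgres database name works.

```shell
# Hypothetical example: write the DDL for an empty target database.
# The database name is illustrative.
cat > create_target_db.sql <<'SQL'
CREATE DATABASE migration_target;
SQL

# Run it from the Quick Connect psql session, for example:
#   \i create_target_db.sql
echo "target database DDL written to create_target_db.sql"
```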
Lines changed: 30 additions & 0 deletions
---
title: "Creating a migration"
---

After you use the EDB DMS Reader to read the source database, create a new migration in the EDB Postgres® AI Console. This establishes a sync between the source database and a target cluster in the EDB Postgres AI Console.

1. Access the [EDB Postgres AI Console](https://portal.biganimal.com) and log in with your EDB Postgres AI Database Cloud Service credentials.

1. Select the project where you created the database cluster.

1. Within your project, select **Migrate** > **Migrations**.

1. On the **Migrations** page, select **Create New Migration** > **To Managed Postgres**.

1. On the **Create Migration** page, assign a **Name** to the migration.

1. Select the **Source** of the migration. The ID for the EDB DMS Reader is listed in the drop-down menu.

1. Under **Destination**, select a target cluster for the migration, enter the name of the database where you want the migration to copy data, and select **Next**.

1. Select the tables and columns to migrate. Modify the table and column names if needed.

1. Select **Create Migration**.

The EDB Postgres AI Console now displays a new migration. The EDB DMS Reader streams data continuously while the migration displays the **Running** state. Changes to data are replicated from the source to the target database as long as the migration is running.

!!!note
The EDB DMS Reader streams data changes. It doesn't stream changes to DDL objects.
!!!
Lines changed: 35 additions & 0 deletions
---
title: "Getting started"
description: Understand how to create a migration from planning to execution.
indexCards: none
navigation:
- create_database
- prepare_schema
- installing
- preparing_db
- config_reader
- create_migration
- mark_completed
- apply_constraints
- verify_migration
---

Creating a migration from an Oracle or Postgres database to EDB Postgres AI involves several steps.

1. **[Create a target Postgres database cluster](create_database)**: In the EDB Postgres® AI Console, ensure you have created a database cluster. Connect to the cluster and create a database that serves as the target for the migration.

1. **[Prepare the source database schema](prepare_schema)**: On your source machine, prepare the source database schema by exporting it and excluding unsupported constraints. Then, import the adapted schema to the target database.

1. **[Install the EDB DMS Reader](installing)**: On your source machine, install the EDB DMS Reader from the EDB repository.

1. **[Prepare your source Oracle or Postgres database](preparing_db)**: On your source machine, prepare the source database by altering settings and creating the users required for the migration. Ensure your source database can accept SSL connections.

1. **[Configure the EDB DMS Reader](config_reader)**: In the EDB Postgres AI Console, download dedicated migration credentials. On your source machine, configure the EDB DMS Reader by exporting environment variables that allow the Reader to connect to the source. Execute the Reader.

1. **[Create a new migration](create_migration)**: In the EDB Postgres AI Console, create a new migration by selecting the source generated by the Reader in the Console, and selecting the target database you created for this purpose.

1. **[Mark the migration as completed](mark_completed)**: In the EDB Postgres AI Console, mark the migration as completed to stop the streaming process.

1. **[Reapply any excluded constraints](apply_constraints)**: Apply the constraints you excluded from the schema migration in the new database.

1. **[Verify that the migration completed successfully](verify_migration)**: Use LiveCompare to ensure the target database has the same data as the source database.
Lines changed: 40 additions & 0 deletions
---
navTitle: Installing
title: Installing EDB Data Migration Service Reader on Linux
indexCards: none

navigation:
- linux_x86_64
---

Select a link to access the applicable installation instructions:

## Linux [x86-64 (amd64)](linux_x86_64)

### Red Hat Enterprise Linux (RHEL) and derivatives

- [RHEL 9](linux_x86_64/edb-dms-reader_rhel_9), [RHEL 8](linux_x86_64/edb-dms-reader_rhel_8)

- [Oracle Linux (OL) 9](linux_x86_64/edb-dms-reader_rhel_9), [Oracle Linux (OL) 8](linux_x86_64/edb-dms-reader_rhel_8)

- [Rocky Linux 9](linux_x86_64/edb-dms-reader_other_linux_9)

- [AlmaLinux 9](linux_x86_64/edb-dms-reader_other_linux_9)

### SUSE Linux Enterprise (SLES)

- [SLES 15](linux_x86_64/edb-dms-reader_sles_15)

### Debian and derivatives

- [Ubuntu 22.04](linux_x86_64/edb-dms-reader_ubuntu_22), [Ubuntu 20.04](linux_x86_64/edb-dms-reader_ubuntu_20)

- [Debian 12](linux_x86_64/edb-dms-reader_debian_12), [Debian 11](linux_x86_64/edb-dms-reader_debian_11)

## Linux [AArch64 (ARM64)](linux_arm64)

### Red Hat Enterprise Linux (RHEL) and derivatives

- [RHEL 9](linux_arm64/edb-dms-reader_rhel_9)

- [Oracle Linux (OL) 9](linux_arm64/edb-dms-reader_rhel_9)
---
navTitle: RHEL 9 or OL 9
title: Installing EDB Data Migration Service Reader on RHEL 9 or OL 9 arm64
---

## Prerequisites

Before you begin the installation process:

- Set up the EDB repository.

  Setting up the repository is a one-time task. If you have already set up your repository, you don't need to perform this step.

  To determine if your repository exists, enter this command:

  `dnf repolist | grep enterprisedb`

  If no output is generated, the repository isn't installed.

  To set up the EDB repository:

  1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads).

  1. Select the button that provides access to the EDB repository.

  1. Select the platform and software that you want to download.

  1. Follow the instructions for setting up the EDB repository.

## Install the package

Install the EDB DMS Reader (packaged as `cdcreader`):

```shell
sudo dnf install cdcreader
```
