From 6d8c29b4078a03c7985d3764c80bb0aef8892be7 Mon Sep 17 00:00:00 2001 From: xixirangrang <35301108+hfxsd@users.noreply.github.com> Date: Fri, 24 Nov 2023 16:12:19 +0800 Subject: [PATCH 1/9] Update changefeed-sink-to-apache-kafka.md --- tidb-cloud/changefeed-sink-to-apache-kafka.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/tidb-cloud/changefeed-sink-to-apache-kafka.md b/tidb-cloud/changefeed-sink-to-apache-kafka.md index d29babcc7e609..f1ec8d92a3e54 100644 --- a/tidb-cloud/changefeed-sink-to-apache-kafka.md +++ b/tidb-cloud/changefeed-sink-to-apache-kafka.md @@ -66,14 +66,14 @@ For example, if your Kafka cluster is in Confluent Cloud, you can see [Resources ## Step 2. Configure the changefeed target 1. Under **Brokers Configuration**, fill in your Kafka brokers endpoints. You can use commas `,` to separate multiple endpoints. -2. Select your Kafka version. If you do not know that, use Kafka V2. -3. Select a desired compression type for the data in this changefeed. -4. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection. -5. Select the **Authentication** option according to your Kafka authentication configuration. +2. Select the **Authentication** option according to your Kafka authentication configuration. - If your Kafka does not require authentication, keep the default option **DISABLE**. - If your Kafka requires authentication, select the corresponding authentication type, and then fill in the user name and password of your Kafka account for authentication. +3. Select your Kafka version. If you do not know that, use Kafka V2. +4. Select a desired compression type for the data in this changefeed. +5. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection. 6. Click **Next** to check the configurations you set and go to the next page. ## Step 3. Set the changefeed From 4c33756f759c3510439e78380bbbd5098584f85d Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Fri, 24 Nov 2023 16:20:20 +0800 Subject: [PATCH 2/9] Apply suggestions from code review Co-authored-by: Lilian Lee --- tidb-cloud/changefeed-sink-to-apache-kafka.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tidb-cloud/changefeed-sink-to-apache-kafka.md b/tidb-cloud/changefeed-sink-to-apache-kafka.md index f1ec8d92a3e54..3fa4065f88195 100644 --- a/tidb-cloud/changefeed-sink-to-apache-kafka.md +++ b/tidb-cloud/changefeed-sink-to-apache-kafka.md @@ -66,9 +66,9 @@ For example, if your Kafka cluster is in Confluent Cloud, you can see [Resources ## Step 2. Configure the changefeed target 1. Under **Brokers Configuration**, fill in your Kafka brokers endpoints. You can use commas `,` to separate multiple endpoints. -2. Select the **Authentication** option according to your Kafka authentication configuration. +2. Select the authentication option according to your Kafka authentication configuration. - - If your Kafka does not require authentication, keep the default option **DISABLE**. + - If your Kafka does not require authentication, keep the default option **Disable**. - If your Kafka requires authentication, select the corresponding authentication type, and then fill in the user name and password of your Kafka account for authentication. 3. Select your Kafka version. If you do not know that, use Kafka V2. 
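To make the broker and authentication settings reordered in the patch above more concrete, the following is a minimal sketch of how a downstream application might consume the changefeed topic with matching settings. It assumes the `kafka-python` client and SASL/PLAIN over TLS; the broker endpoints, topic name, and credentials are placeholders, not values taken from these patches.

```python
from kafka import KafkaConsumer

# Placeholder broker endpoints (comma-separated in the TiDB Cloud console,
# passed here as a Python list). TLS Encryption enabled => SASL_SSL.
consumer = KafkaConsumer(
    "tidb-changefeed-topic",                  # placeholder topic name
    bootstrap_servers=[
        "broker1.example.com:9092",
        "broker2.example.com:9092",
    ],
    security_protocol="SASL_SSL",             # TLS plus SASL authentication
    sasl_mechanism="PLAIN",                   # match your Kafka authentication type
    sasl_plain_username="changefeed-user",    # placeholder credentials
    sasl_plain_password="changefeed-password",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: v.decode("utf-8"),
)

for record in consumer:
    # Each record carries one change event emitted by the changefeed.
    print(record.topic, record.partition, record.offset, record.value)
```

If the Kafka cluster requires neither authentication nor TLS, `security_protocol="PLAINTEXT"` applies and the SASL parameters can be omitted.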
From 7e4863eb76b508035cc470e3864c0777822bf0e8 Mon Sep 17 00:00:00 2001 From: xixirangrang <35301108+hfxsd@users.noreply.github.com> Date: Fri, 24 Nov 2023 16:28:53 +0800 Subject: [PATCH 3/9] add Supported maximum number of tables for DM --- tidb-cloud/tidb-cloud-billing-dm.md | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/tidb-cloud/tidb-cloud-billing-dm.md b/tidb-cloud/tidb-cloud-billing-dm.md index fe2f799576f47..386903245e753 100644 --- a/tidb-cloud/tidb-cloud-billing-dm.md +++ b/tidb-cloud/tidb-cloud-billing-dm.md @@ -13,12 +13,14 @@ TiDB Cloud measures the capacity of Data Migration in Replication Capacity Units The following table lists the specifications and corresponding performances for Data Migration. -| Specification | Full data migration | Incremental data migration | -|---------------|---------------------|----------------------------| -| 2 RCUs | 25 MiB/s | 10,000 rows/s| -| 4 RCUs | 35 MiB/s | 20,000 rows/s| -| 8 RCUs | 40 MiB/s | 40,000 rows/s| -| 16 RCUs | 45 MiB/s | 80,000 rows/s| +| Specification | Full data migration | Incremental data migration | Supported maximum number of tables | +|---------------|---------------------|----------------------------|-----------------------| +| 2 RCUs | 25 MiB/s | 10,000 rows/s | 500 | +| 4 RCUs | 35 MiB/s | 20,000 rows/s | 10000 | +| 8 RCUs | 40 MiB/s | 40,000 rows/s | 30000 | +| 16 RCUs | 45 MiB/s | 80,000 rows/s | 60000 | + +For more information about the prices of Data Migration RCUs, see [Data Migration Cost](https://www.pingcap.com/tidb-dedicated-pricing-details/#dm-cost). Note that all the performance values in this table are maximum performances. It is assumed that there are no performance, network bandwidth, or other bottlenecks in the upstream and downstream databases. The performance values are for reference only and might vary in different scenarios. From 4ac4e1c7ed464d5837a3c4210793031d8d6f2dc3 Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Fri, 24 Nov 2023 16:29:19 +0800 Subject: [PATCH 4/9] Update tidb-cloud/changefeed-sink-to-apache-kafka.md --- tidb-cloud/changefeed-sink-to-apache-kafka.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tidb-cloud/changefeed-sink-to-apache-kafka.md b/tidb-cloud/changefeed-sink-to-apache-kafka.md index 3fa4065f88195..f7580d4d904ec 100644 --- a/tidb-cloud/changefeed-sink-to-apache-kafka.md +++ b/tidb-cloud/changefeed-sink-to-apache-kafka.md @@ -66,7 +66,7 @@ For example, if your Kafka cluster is in Confluent Cloud, you can see [Resources ## Step 2. Configure the changefeed target 1. Under **Brokers Configuration**, fill in your Kafka brokers endpoints. You can use commas `,` to separate multiple endpoints. -2. Select the authentication option according to your Kafka authentication configuration. +2. Select an authentication option according to your Kafka authentication configuration. - If your Kafka does not require authentication, keep the default option **Disable**. - If your Kafka requires authentication, select the corresponding authentication type, and then fill in the user name and password of your Kafka account for authentication. 
From 37a75e100d59954638b1cdd2321639cb5a1aea4b Mon Sep 17 00:00:00 2001 From: xixirangrang <35301108+hfxsd@users.noreply.github.com> Date: Fri, 24 Nov 2023 16:56:21 +0800 Subject: [PATCH 5/9] add a note for DM --- tidb-cloud/tidb-cloud-billing-dm.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/tidb-cloud/tidb-cloud-billing-dm.md b/tidb-cloud/tidb-cloud-billing-dm.md index 386903245e753..c255f3c6afbce 100644 --- a/tidb-cloud/tidb-cloud-billing-dm.md +++ b/tidb-cloud/tidb-cloud-billing-dm.md @@ -11,9 +11,9 @@ This document describes the billing for Data Migration in TiDB Cloud. TiDB Cloud measures the capacity of Data Migration in Replication Capacity Units (RCUs). When you create a Data Migration job, you can select an appropriate specification. The higher the RCU, the better the migration performance. You will be charged for these Data Migration RCUs. -The following table lists the specifications and corresponding performances for Data Migration. +The following table lists the specifications, corresponding performances, and recommended maximum number of tables for Data Migration. -| Specification | Full data migration | Incremental data migration | Supported maximum number of tables | +| Specification | Full data migration | Incremental data migration | Recommended maximum number of tables | |---------------|---------------------|----------------------------|-----------------------| | 2 RCUs | 25 MiB/s | 10,000 rows/s | 500 | | 4 RCUs | 35 MiB/s | 20,000 rows/s | 10000 | @@ -22,7 +22,10 @@ The following table lists the specifications and corresponding performances for For more information about the prices of Data Migration RCUs, see [Data Migration Cost](https://www.pingcap.com/tidb-dedicated-pricing-details/#dm-cost). -Note that all the performance values in this table are maximum performances. It is assumed that there are no performance, network bandwidth, or other bottlenecks in the upstream and downstream databases. The performance values are for reference only and might vary in different scenarios. +> **Note:** +> +> - If the number of tables exceeds the recommended maximum number of tables, the Data Migration job can still run, but the job might become unstable or even fail. +> - All the performance values in this table are maximum performances. It is assumed that there are no performance, network bandwidth, or other bottlenecks in the upstream and downstream databases. The performance values are for reference only and might vary in different scenarios. The Data Migration job measures full data migration performance in MiB/s. This unit indicates the amount of data (in MiB) that is migrated per second by the Data Migration job. From 5eecb09b8a5c68ae2665f851bd832eec16af0e30 Mon Sep 17 00:00:00 2001 From: xixirangrang <35301108+hfxsd@users.noreply.github.com> Date: Fri, 24 Nov 2023 17:02:44 +0800 Subject: [PATCH 6/9] deleted "recommended" --- tidb-cloud/tidb-cloud-billing-dm.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tidb-cloud/tidb-cloud-billing-dm.md b/tidb-cloud/tidb-cloud-billing-dm.md index c255f3c6afbce..253491a7f58a8 100644 --- a/tidb-cloud/tidb-cloud-billing-dm.md +++ b/tidb-cloud/tidb-cloud-billing-dm.md @@ -13,7 +13,7 @@ TiDB Cloud measures the capacity of Data Migration in Replication Capacity Units The following table lists the specifications, corresponding performances, and recommended maximum number of tables for Data Migration. 
-| Specification | Full data migration | Incremental data migration | Recommended maximum number of tables | +| Specification | Full data migration | Incremental data migration | Maximum number of tables | |---------------|---------------------|----------------------------|-----------------------| | 2 RCUs | 25 MiB/s | 10,000 rows/s | 500 | | 4 RCUs | 35 MiB/s | 20,000 rows/s | 10000 | @@ -24,7 +24,7 @@ For more information about the prices of Data Migration RCUs, see [Data Migratio > **Note:** > -> - If the number of tables exceeds the recommended maximum number of tables, the Data Migration job can still run, but the job might become unstable or even fail. +> - If the number of tables exceeds the maximum number of tables, the Data Migration job can still run, but the job might become unstable or even fail. > - All the performance values in this table are maximum performances. It is assumed that there are no performance, network bandwidth, or other bottlenecks in the upstream and downstream databases. The performance values are for reference only and might vary in different scenarios. The Data Migration job measures full data migration performance in MiB/s. This unit indicates the amount of data (in MiB) that is migrated per second by the Data Migration job. From e05a8dfed317140ec99f93681330099d86324cbd Mon Sep 17 00:00:00 2001 From: xixirangrang <35301108+hfxsd@users.noreply.github.com> Date: Fri, 24 Nov 2023 17:03:05 +0800 Subject: [PATCH 7/9] Update tidb-cloud-billing-dm.md --- tidb-cloud/tidb-cloud-billing-dm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tidb-cloud/tidb-cloud-billing-dm.md b/tidb-cloud/tidb-cloud-billing-dm.md index 253491a7f58a8..7dae94c17eb44 100644 --- a/tidb-cloud/tidb-cloud-billing-dm.md +++ b/tidb-cloud/tidb-cloud-billing-dm.md @@ -11,7 +11,7 @@ This document describes the billing for Data Migration in TiDB Cloud. TiDB Cloud measures the capacity of Data Migration in Replication Capacity Units (RCUs). When you create a Data Migration job, you can select an appropriate specification. The higher the RCU, the better the migration performance. You will be charged for these Data Migration RCUs. -The following table lists the specifications, corresponding performances, and recommended maximum number of tables for Data Migration. +The following table lists the specifications, corresponding performances, and the maximum number of tables for Data Migration. | Specification | Full data migration | Incremental data migration | Maximum number of tables | |---------------|---------------------|----------------------------|-----------------------| From 3855328604401d2f36169bda1d20499efb5d14b6 Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Fri, 24 Nov 2023 17:35:33 +0800 Subject: [PATCH 8/9] Apply suggestions from code review Co-authored-by: Frank945946 <108602632+Frank945946@users.noreply.github.com> --- tidb-cloud/tidb-cloud-billing-dm.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tidb-cloud/tidb-cloud-billing-dm.md b/tidb-cloud/tidb-cloud-billing-dm.md index 7dae94c17eb44..d48ef55418695 100644 --- a/tidb-cloud/tidb-cloud-billing-dm.md +++ b/tidb-cloud/tidb-cloud-billing-dm.md @@ -11,7 +11,7 @@ This document describes the billing for Data Migration in TiDB Cloud. TiDB Cloud measures the capacity of Data Migration in Replication Capacity Units (RCUs). When you create a Data Migration job, you can select an appropriate specification. The higher the RCU, the better the migration performance. 
You will be charged for these Data Migration RCUs. -The following table lists the specifications, corresponding performances, and the maximum number of tables for Data Migration. +The following table lists the corresponding performances and the maximum number of tables that can be migrated by each Data Migration specification. | Specification | Full data migration | Incremental data migration | Maximum number of tables | |---------------|---------------------|----------------------------|-----------------------| @@ -24,7 +24,7 @@ For more information about the prices of Data Migration RCUs, see [Data Migratio > **Note:** > -> - If the number of tables exceeds the maximum number of tables, the Data Migration job can still run, but the job might become unstable or even fail. +> - If the number of tables to be migrated exceeds the maximum number of tables, the Data Migration job might can still run, but the job might become unstable or even fail. > - All the performance values in this table are maximum performances. It is assumed that there are no performance, network bandwidth, or other bottlenecks in the upstream and downstream databases. The performance values are for reference only and might vary in different scenarios. The Data Migration job measures full data migration performance in MiB/s. This unit indicates the amount of data (in MiB) that is migrated per second by the Data Migration job. From 1406a8b6a2edcaa9956283e8c9df96e98d5ff29b Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Fri, 24 Nov 2023 17:51:43 +0800 Subject: [PATCH 9/9] Apply suggestions from code review Co-authored-by: Lilian Lee --- tidb-cloud/tidb-cloud-billing-dm.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/tidb-cloud/tidb-cloud-billing-dm.md b/tidb-cloud/tidb-cloud-billing-dm.md index d48ef55418695..ab7dbb0a4f3ed 100644 --- a/tidb-cloud/tidb-cloud-billing-dm.md +++ b/tidb-cloud/tidb-cloud-billing-dm.md @@ -11,7 +11,7 @@ This document describes the billing for Data Migration in TiDB Cloud. TiDB Cloud measures the capacity of Data Migration in Replication Capacity Units (RCUs). When you create a Data Migration job, you can select an appropriate specification. The higher the RCU, the better the migration performance. You will be charged for these Data Migration RCUs. -The following table lists the corresponding performances and the maximum number of tables that can be migrated by each Data Migration specification. +The following table lists the corresponding performance and the maximum number of tables that each Data Migration specification can migrate. | Specification | Full data migration | Incremental data migration | Maximum number of tables | |---------------|---------------------|----------------------------|-----------------------| @@ -24,8 +24,8 @@ For more information about the prices of Data Migration RCUs, see [Data Migratio > **Note:** > -> - If the number of tables to be migrated exceeds the maximum number of tables, the Data Migration job might can still run, but the job might become unstable or even fail. -> - All the performance values in this table are maximum performances. It is assumed that there are no performance, network bandwidth, or other bottlenecks in the upstream and downstream databases. The performance values are for reference only and might vary in different scenarios. +> - If the number of tables to be migrated exceeds the maximum number of tables, the Data Migration job might still run, but the job could become unstable or even fail. 
+> - All the performance values in this table are the maximum values under optimal conditions. It is assumed that there are no performance, network bandwidth, or other bottlenecks in the upstream and downstream databases. The performance values are for reference only and might vary in different scenarios.

The Data Migration job measures full data migration performance in MiB/s. This unit indicates the amount of data (in MiB) that is migrated per second by the Data Migration job.
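As a rough illustration of what the throughput figures in the specification table above mean in practice, the sketch below estimates best-case migration times for a hypothetical workload. The dataset size and row counts are assumptions for illustration only, and real jobs remain subject to the caveats in the note above.

```python
# Best-case time estimates derived from the RCU specification table above.
# The workload figures below are illustrative assumptions, not documented values.

FULL_MIB_PER_S = {2: 25, 4: 35, 8: 40, 16: 45}                    # full data migration
INCR_ROWS_PER_S = {2: 10_000, 4: 20_000, 8: 40_000, 16: 80_000}   # incremental migration

dataset_mib = 500 * 1024       # assume a 500 GiB upstream dataset
backlog_rows = 50_000_000      # assume 50 million rows of incremental backlog

for rcu in sorted(FULL_MIB_PER_S):
    full_hours = dataset_mib / FULL_MIB_PER_S[rcu] / 3600
    incr_hours = backlog_rows / INCR_ROWS_PER_S[rcu] / 3600
    print(f"{rcu:>2} RCUs: full load ~ {full_hours:.1f} h, backlog drain ~ {incr_hours:.1f} h")
```

With 2 RCUs, for example, the assumed 500 GiB dataset works out to roughly 5.7 hours of full data migration in the best case.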