cloud: refine the steps for configuring the changefeed target #15461

Merged (10 commits) on Nov 24, 2023
10 changes: 5 additions & 5 deletions tidb-cloud/changefeed-sink-to-apache-kafka.md
@@ -66,14 +66,14 @@ For example, if your Kafka cluster is in Confluent Cloud, you can see [Resources
 ## Step 2. Configure the changefeed target

 1. Under **Brokers Configuration**, fill in your Kafka brokers endpoints. You can use commas `,` to separate multiple endpoints.
-2. Select your Kafka version. If you do not know that, use Kafka V2.
-3. Select a desired compression type for the data in this changefeed.
-4. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
-5. Select the **Authentication** option according to your Kafka authentication configuration.
+2. Select an authentication option according to your Kafka authentication configuration.

-    - If your Kafka does not require authentication, keep the default option **DISABLE**.
+    - If your Kafka does not require authentication, keep the default option **Disable**.
     - If your Kafka requires authentication, select the corresponding authentication type, and then fill in the user name and password of your Kafka account for authentication.

+3. Select your Kafka version. If you do not know that, use Kafka V2.
+4. Select a desired compression type for the data in this changefeed.
+5. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
 6. Click **Next** to check the configurations you set and go to the next page.

 ## Step 3. Set the changefeed
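As a side note for reviewers, the values collected in Step 2 (broker endpoints, TLS, authentication, and compression) map directly onto a standard Kafka client configuration. The sketch below is illustrative only and is not part of the changed docs; it uses the kafka-python client with hypothetical endpoints and credentials to sanity-check that the settings you plan to enter actually connect.

```python
# Illustrative connectivity check for the values entered in "Step 2. Configure
# the changefeed target". All endpoints and credentials below are placeholders.
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    # "Brokers Configuration": comma-separated in the console, a list here.
    bootstrap_servers=["broker1.example.com:9092", "broker2.example.com:9092"],
    # "TLS Encryption" enabled plus user/password authentication.
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",                     # match your Kafka authentication type
    sasl_plain_username="changefeed_user",      # placeholder
    sasl_plain_password="changefeed_password",  # placeholder
    # Compression type selected for the changefeed data.
    compression_type="gzip",
)
# If this send is acknowledged, the same endpoints, TLS setting, and
# credentials should be accepted by the changefeed form.
producer.send("connectivity-test", b"ping").get(timeout=10)
producer.close()
```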
21 changes: 13 additions & 8 deletions tidb-cloud/tidb-cloud-billing-dm.md
@@ -11,16 +11,21 @@ This document describes the billing for Data Migration in TiDB Cloud.

 TiDB Cloud measures the capacity of Data Migration in Replication Capacity Units (RCUs). When you create a Data Migration job, you can select an appropriate specification. The higher the RCU, the better the migration performance. You will be charged for these Data Migration RCUs.

-The following table lists the specifications and corresponding performances for Data Migration.
+The following table lists the corresponding performance and the maximum number of tables that each Data Migration specification can migrate.

-| Specification | Full data migration | Incremental data migration |
-|---------------|---------------------|----------------------------|
-| 2 RCUs | 25 MiB/s | 10,000 rows/s|
-| 4 RCUs | 35 MiB/s | 20,000 rows/s|
-| 8 RCUs | 40 MiB/s | 40,000 rows/s|
-| 16 RCUs | 45 MiB/s | 80,000 rows/s|
+| Specification | Full data migration | Incremental data migration | Maximum number of tables |
+|---------------|---------------------|----------------------------|--------------------------|
+| 2 RCUs | 25 MiB/s | 10,000 rows/s | 500 |
+| 4 RCUs | 35 MiB/s | 20,000 rows/s | 10,000 |
+| 8 RCUs | 40 MiB/s | 40,000 rows/s | 30,000 |
+| 16 RCUs | 45 MiB/s | 80,000 rows/s | 60,000 |

-Note that all the performance values in this table are maximum performances. It is assumed that there are no performance, network bandwidth, or other bottlenecks in the upstream and downstream databases. The performance values are for reference only and might vary in different scenarios.
+For more information about the prices of Data Migration RCUs, see [Data Migration Cost](https://www.pingcap.com/tidb-dedicated-pricing-details/#dm-cost).

+> **Note:**
+>
+> - If the number of tables to be migrated exceeds the maximum number of tables, the Data Migration job might still run, but the job could become unstable or even fail.
+> - All the performance values in this table are maximum and optimal ones. It is assumed that there are no performance, network bandwidth, or other bottlenecks in the upstream and downstream databases. The performance values are for reference only and might vary in different scenarios.

 The Data Migration job measures full data migration performance in MiB/s. This unit indicates the amount of data (in MiB) that is migrated per second by the Data Migration job.

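To put the new table in context, the throughput figures allow a quick back-of-the-envelope estimate of how long a full data migration might take. The snippet below is an illustrative sketch only (the helper name and example numbers are made up for this review); as the note above says, real jobs usually run below these maximums.

```python
# Rough lower-bound estimate of full data migration time, using the maximum
# throughput per specification from the table above. Real throughput depends
# on upstream/downstream load, network bandwidth, and other bottlenecks.
MAX_FULL_MIGRATION_MIB_PER_S = {2: 25, 4: 35, 8: 40, 16: 45}  # RCUs -> MiB/s

def full_migration_hours(data_size_gib: float, rcus: int) -> float:
    """Return the minimum time (in hours) to migrate data_size_gib GiB of existing data."""
    return (data_size_gib * 1024) / MAX_FULL_MIGRATION_MIB_PER_S[rcus] / 3600

# Example: 500 GiB of existing data on an 8 RCU job takes at least
# 512,000 MiB / 40 MiB/s = 12,800 s, i.e. roughly 3.6 hours.
print(f"{full_migration_hours(500, 8):.1f} hours")  # -> 3.6 hours
```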