From be0dd8217cf8bd407acd2a777545a5594f3410f9 Mon Sep 17 00:00:00 2001
From: xixirangrang <35301108+hfxsd@users.noreply.github.com>
Date: Thu, 18 Jan 2024 16:21:00 +0800
Subject: [PATCH 1/2] fix 2 issues

---
 br/br-pitr-guide.md          |  8 +++-----
 ticdc/ticdc-open-protocol.md | 14 +++++++-------
 2 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/br/br-pitr-guide.md b/br/br-pitr-guide.md
index 56426e8733cf6..76f2f4b704702 100644
--- a/br/br-pitr-guide.md
+++ b/br/br-pitr-guide.md
@@ -103,9 +103,7 @@ The following steps describe how to clean up backup data that exceeds the backup
     rm -rf s3://backup-101/snapshot-${date}
     ```

-## Performance and impact of PITR
-
-### Capabilities
+## Performance capabilities of PITR

 - On each TiKV node, PITR can restore snapshot data at a speed of 280 GB/h and log data 30 GB/h.
 - BR deletes outdated log backup data at a speed of 600 GB/h.
@@ -121,7 +119,7 @@ The following steps describe how to clean up backup data that exceeds the backup
 > The default replica number for all clusters in the test is 3.
 > To improve the overall restore performance, you can modify the [`import.num-threads`](/tikv-configuration-file.md#import) item in the TiKV configuration file and the [`concurrency`](/br/use-br-command-line-tool.md#common-options) option in the BR command.

-Testing scenario 1 (on [TiDB Cloud](https://tidbcloud.com)):
+Testing scenario 1 (on [TiDB Cloud](https://tidbcloud.com)) is as follows:

 - The number of TiKV nodes (8 core, 16 GB memory): 21
 - TiKV configuration item `import.num-threads`: 8
@@ -130,7 +128,7 @@ Testing scenario 1 (on [TiDB Cloud](https://tidbcloud.com)):
 - New log data created in the cluster: 10 GB/h
 - Write (INSERT/UPDATE/DELETE) QPS: 10,000

-Testing scenario 2 (on TiDB Self-Hosted):
+Testing scenario 2 (on TiDB Self-Hosted) is as follows:

 - The number of TiKV nodes (8 core, 64 GB memory): 6
 - TiKV configuration item `import.num-threads`: 8
diff --git a/ticdc/ticdc-open-protocol.md b/ticdc/ticdc-open-protocol.md
index 54ad9facf27bf..7e345531e09b6 100644
--- a/ticdc/ticdc-open-protocol.md
+++ b/ticdc/ticdc-open-protocol.md
@@ -50,7 +50,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

 + **Key:**

-    ```
+    ```json
     {
         "ts":<TS>,
         "scm":<Schema Name>,
@@ -69,7 +69,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

     `Insert` event. The newly added row data is output.

-    ```
+    ```json
     {
         "u":{
             <Column Name>:{
@@ -90,7 +90,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

     `Update` event. The newly added row data ("u") and the row data before the update ("p") are output.

-    ```
+    ```json
     {
         "u":{
             <Column Name>:{
@@ -125,7 +125,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

     `Delete` event. The deleted row data is output.
-    ```
+    ```json
     {
         "d":{
             <Column Name>:{
@@ -156,7 +156,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

 + **Key:**

-    ```
+    ```json
     {
         "ts":<TS>,
         "scm":<Schema Name>,
@@ -173,7 +173,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

 + **Value:**

-    ```
+    ```json
     {
         "q":<DDL Query>,
         "t":<DDL Type>
@@ -189,7 +189,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

 + **Key:**

-    ```
+    ```json
     {
         "ts":<TS>,
         "t":3

From c97cec72e95cb10ff76a2b2510f697f518663caa Mon Sep 17 00:00:00 2001
From: xixirangrang <35301108+hfxsd@users.noreply.github.com>
Date: Thu, 18 Jan 2024 16:29:59 +0800
Subject: [PATCH 2/2] fix links

---
 br/backup-and-restore-overview.md | 2 +-
 dr-solution-introduction.md       | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/br/backup-and-restore-overview.md b/br/backup-and-restore-overview.md
index 0f4753bbf6d86..41e51f8e935ee 100644
--- a/br/backup-and-restore-overview.md
+++ b/br/backup-and-restore-overview.md
@@ -94,7 +94,7 @@ Corresponding to the backup features, you can perform two types of restore: full
 #### Restore performance and impact on TiDB clusters

 - Data restore is performed at a scalable speed. Generally, the speed is 100 MiB/s per TiKV node. `br` only supports restoring data to a new cluster and uses the resources of the target cluster as much as possible. For more details, see [Restore performance and impact](/br/br-snapshot-guide.md#performance-and-impact-of-snapshot-restore).
-- On each TiKV node, PITR can restore log data at 30 GiB/h. For more details, see [PITR performance and impact](/br/br-pitr-guide.md#performance-and-impact-of-pitr).
+- On each TiKV node, PITR can restore log data at 30 GiB/h. For more details, see [PITR performance and impact](/br/br-pitr-guide.md#performance-capabilities-of-pitr).

 ## Backup storage
diff --git a/dr-solution-introduction.md b/dr-solution-introduction.md
index ca3a80cc91e0c..dd70a150d0c78 100644
--- a/dr-solution-introduction.md
+++ b/dr-solution-introduction.md
@@ -93,7 +93,7 @@ Of course, if the error tolerance objective is multiple regions and RPO must be

 In this architecture, TiDB cluster 1 is deployed in region 1. BR regularly backs up the data of cluster 1 to region 2, and continuously backs up the data change logs of this cluster to region 2 as well. When region 1 encounters a disaster and cluster 1 cannot be recovered, you can use the backup data and data change logs to restore a new cluster (cluster 2) in region 2 to provide services.

-The DR solution based on BR provides an RPO lower than 5 minutes and an RTO that varies with the size of the data to be restored. For BR v6.5.0, you can refer to [Performance and impact of snapshot restore](/br/br-snapshot-guide.md#performance-and-impact-of-snapshot-restore) and [Performance and impact of PITR](/br/br-pitr-guide.md#performance-and-impact-of-pitr) to learn about the restore speed. Usually, the feature of backup across regions is considered the last resort of data security and also a must-have solution for most systems. For more information about this solution, see [DR solution based on BR](/dr-backup-restore.md).
+The DR solution based on BR provides an RPO lower than 5 minutes and an RTO that varies with the size of the data to be restored. For BR v6.5.0, you can refer to [Performance and impact of snapshot restore](/br/br-snapshot-guide.md#performance-and-impact-of-snapshot-restore) and [Performance and impact of PITR](/br/br-pitr-guide.md#performance-capabilities-of-pitr) to learn about the restore speed. Usually, the feature of backup across regions is considered the last resort of data security and also a must-have solution for most systems. For more information about this solution, see [DR solution based on BR](/dr-backup-restore.md).

 Meanwhile, starting from v6.5.0, BR supports [restoring a TiDB cluster from EBS volume snapshots](https://docs.pingcap.com/tidb-in-kubernetes/stable/restore-from-aws-s3-by-snapshot). If your cluster is running on Kubernetes and you want to restore the cluster as fast as possible without affecting the cluster, you can use this feature to reduce the RTO of your system.