fix 2 issues #16183

Merged 2 commits on Jan 19, 2024
2 changes: 1 addition & 1 deletion br/backup-and-restore-overview.md
@@ -94,7 +94,7 @@ Corresponding to the backup features, you can perform two types of restore: full
#### Restore performance and impact on TiDB clusters

- Data restore is performed at a scalable speed. Generally, the speed is 100 MiB/s per TiKV node. `br` only supports restoring data to a new cluster and uses the resources of the target cluster as much as possible. For more details, see [Restore performance and impact](/br/br-snapshot-guide.md#performance-and-impact-of-snapshot-restore).
- On each TiKV node, PITR can restore log data at 30 GiB/h. For more details, see [PITR performance and impact](/br/br-pitr-guide.md#performance-and-impact-of-pitr).
- On each TiKV node, PITR can restore log data at 30 GiB/h. For more details, see [PITR performance and impact](/br/br-pitr-guide.md#performance-capabilities-of-pitr).
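As a rough sketch of the snapshot restore described in the first bullet above (the PD address, storage URI, and rate limit are illustrative assumptions, not values from this PR):

```shell
# Restore a snapshot (full) backup into a new, empty cluster.
# The PD address and the S3 path below are placeholder values.
tiup br restore full \
    --pd "10.0.1.10:2379" \
    --storage "s3://backup-101/snapshot-20240119" \
    --ratelimit 128
```

The `--ratelimit` option caps the restore speed per TiKV node in MiB/s, which is one way to bound the impact on the target cluster.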

## Backup storage

8 changes: 3 additions & 5 deletions br/br-pitr-guide.md
@@ -103,9 +103,7 @@ The following steps describe how to clean up backup data that exceeds the backup
rm -rf s3://backup-101/snapshot-${date}
```
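The visible part of this hunk only covers deleting an outdated snapshot; the matching log-backup cleanup could look like the sketch below (the storage URI and `${truncate-ts}` value are assumptions, not part of this PR):

```shell
# Delete log backup data written before the retention boundary.
# 's3://backup-101/logbackup' and ${truncate-ts} are placeholder values.
tiup br log truncate \
    --until=${truncate-ts} \
    --storage='s3://backup-101/logbackup'
```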

## Performance and impact of PITR

### Capabilities
## Performance capabilities of PITR

- On each TiKV node, PITR can restore snapshot data at a speed of 280 GB/h and log data at 30 GB/h.
- BR deletes outdated log backup data at a speed of 600 GB/h.
@@ -121,7 +119,7 @@ The following steps describe how to clean up backup data that exceeds the backup
> The default replica number for all clusters in the test is 3.
> To improve the overall restore performance, you can modify the [`import.num-threads`](/tikv-configuration-file.md#import) item in the TiKV configuration file and the [`concurrency`](/br/use-br-command-line-tool.md#common-options) option in the BR command.
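As a hedged illustration of the tuning described in the note above: `import.num-threads` is set under the `[import]` section of the TiKV configuration file, while `concurrency` is passed on the BR command line. The values and paths below are examples only, not recommendations from this PR:

```shell
# Example PITR restore with a higher restore concurrency.
# The PD address, storage URIs, and --concurrency value are placeholders;
# import.num-threads is raised separately in the TiKV configuration file ([import] num-threads).
tiup br restore point \
    --pd "10.0.1.10:2379" \
    --full-backup-storage "s3://backup-101/snapshot-${date}" \
    --storage "s3://backup-101/logbackup" \
    --concurrency 128
```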

Testing scenario 1 (on [TiDB Cloud](https://tidbcloud.com)):
Testing scenario 1 (on [TiDB Cloud](https://tidbcloud.com)) is as follows:

- The number of TiKV nodes (8 core, 16 GB memory): 21
- TiKV configuration item `import.num-threads`: 8
@@ -130,7 +128,7 @@ Testing scenario 1 (on [TiDB Cloud](https://tidbcloud.com)):
- New log data created in the cluster: 10 GB/h
- Write (INSERT/UPDATE/DELETE) QPS: 10,000

Testing scenario 2 (on TiDB Self-Hosted):
Testing scenario 2 (on TiDB Self-Hosted) is as follows:

- The number of TiKV nodes (8 core, 64 GB memory): 6
- TiKV configuration item `import.num-threads`: 8
2 changes: 1 addition & 1 deletion dr-solution-introduction.md
@@ -93,7 +93,7 @@ Of course, if the error tolerance objective is multiple regions and RPO must be

In this architecture, TiDB cluster 1 is deployed in region 1. BR regularly backs up the data of cluster 1 to region 2, and continuously backs up the data change logs of this cluster to region 2 as well. When region 1 encounters a disaster and cluster 1 cannot be recovered, you can use the backup data and data change logs to restore a new cluster (cluster 2) in region 2 to provide services.

The DR solution based on BR provides an RPO lower than 5 minutes and an RTO that varies with the size of the data to be restored. For BR v6.5.0, you can refer to [Performance and impact of snapshot restore](/br/br-snapshot-guide.md#performance-and-impact-of-snapshot-restore) and [Performance and impact of PITR](/br/br-pitr-guide.md#performance-and-impact-of-pitr) to learn about the restore speed. Usually, the feature of backup across regions is considered the last resort of data security and also a must-have solution for most systems. For more information about this solution, see [DR solution based on BR](/dr-backup-restore.md).
The DR solution based on BR provides an RPO lower than 5 minutes and an RTO that varies with the size of the data to be restored. For BR v6.5.0, you can refer to [Performance and impact of snapshot restore](/br/br-snapshot-guide.md#performance-and-impact-of-snapshot-restore) and [Performance and impact of PITR](/br/br-pitr-guide.md#performance-capabilities-of-pitr) to learn about the restore speed. Usually, the feature of backup across regions is considered the last resort of data security and also a must-have solution for most systems. For more information about this solution, see [DR solution based on BR](/dr-backup-restore.md).
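For context, rebuilding cluster 2 in region 2 from the snapshot backup plus the data change logs is a point-in-time restore; a minimal sketch (the PD address, storage URIs, and timestamp are placeholders, not content from this PR) follows:

```shell
# Restore a new cluster in region 2 to a recent point in time.
# The PD address, storage URIs, and timestamp are placeholder values.
tiup br restore point \
    --pd "region2-pd-host:2379" \
    --full-backup-storage "s3://dr-backup/snapshot" \
    --storage "s3://dr-backup/log" \
    --restored-ts "2024-01-19 12:00:00+0800"
```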

Meanwhile, starting from v6.5.0, BR supports [restoring a TiDB cluster from EBS volume snapshots](https://docs.pingcap.com/tidb-in-kubernetes/stable/restore-from-aws-s3-by-snapshot). If your cluster is running on Kubernetes and you want to restore the cluster as fast as possible without affecting the cluster, you can use this feature to reduce the RTO of your system.

14 changes: 7 additions & 7 deletions ticdc/ticdc-open-protocol.md
@@ -50,7 +50,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

+ **Key:**

```
```json
{
"ts":<TS>,
"scm":<Schema Name>,
@@ -69,7 +69,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

`Insert` event. The newly added row data is output.

```
```json
{
"u":{
<Column Name>:{
@@ -90,7 +90,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

`Update` event. The newly added row data ("u") and the row data before the update ("p") are output.

```
```json
{
"u":{
<Column Name>:{
@@ -125,7 +125,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

`Delete` event. The deleted row data is output.

```
```json
{
"d":{
<Column Name>:{
@@ -156,7 +156,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

+ **Key:**

```
```json
{
"ts":<TS>,
"scm":<Schema Name>,
@@ -173,7 +173,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

+ **Value:**

```
```json
{
"q":<DDL Query>,
"t":<DDL Type>
@@ -189,7 +189,7 @@ This section introduces the formats of Row Changed Event, DDL Event, and Resolve

+ **Key:**

```
```json
{
"ts":<TS>,
"t":3