br: remove --tikv-max-restore-concurrency (#16314)
Oreoxmt authored Jan 25, 2024
1 parent 6478fea commit 533c83e
Showing 2 changed files with 3 additions and 6 deletions.
br/br-snapshot-guide.md (3 changes: 1 addition & 2 deletions)
@@ -208,15 +208,14 @@ The impact of backup on cluster performance can be reduced by limiting the backup

- During data restore, TiDB tries to fully utilize the TiKV CPU, disk IO, and network bandwidth resources. Therefore, it is recommended to restore the backup data on an empty cluster to avoid affecting the running applications.
- The speed of restoring backup data is closely related to the cluster configuration, deployment, and running applications. In internal tests, the restore speed of a single TiKV node can reach 100 MiB/s. The performance and impact of snapshot restore vary across user scenarios and should be tested in actual environments.
- - Starting from v7.6.0, to accelerate restore speed in large-scale Region scenarios, BR introduces an experimental feature that allows you to enable a coarse-grained Region scatter algorithm by specifying the command-line parameter `--granularity="coarse-grained"`. You can also control the concurrency of download tasks for each TiKV node by setting the `--tikv-max-restore-concurrency` parameter. This helps you fully utilize the resources of each TiKV node, thereby achieving a rapid parallel recovery. In several real-world cases, the snapshot restore speed of the cluster is improved by about 10 times in large-scale Region scenarios. The following is an example:
+ - Starting from v7.6.0, to accelerate restore speed in large-scale Region scenarios, BR introduces an experimental feature that allows you to enable a coarse-grained Region scatter algorithm by specifying the command-line parameter `--granularity="coarse-grained"`. This algorithm ensures that each TiKV node receives stable and evenly distributed download tasks, thus fully utilizing the resources of each TiKV node and achieving a rapid parallel recovery. In several real-world cases, the snapshot restore speed of the cluster is improved by about 10 times in large-scale Region scenarios. The following is an example:

```bash
br restore full \
--pd "${PDIP}:2379" \
--storage "s3://${Bucket}/${Folder}" \
--s3.region "${region}" \
--granularity "coarse-grained" \
- --tikv-max-restore-concurrency 128 \
--send-credentials-to-tikv=true \
--log-file restorefull.log
```
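
For context, the following is a minimal sketch of the `br backup full` command that produces the kind of snapshot consumed by the restore example above. It is not part of this commit; the placeholders (`${PDIP}`, `${Bucket}`, `${Folder}`, `${region}`) mirror the restore example, and `backupfull.log` is an illustrative log file name.

```bash
# Sketch only: take the full snapshot backup that `br restore full` later consumes.
# Placeholders mirror the restore example above; adjust them to your cluster and bucket.
br backup full \
--pd "${PDIP}:2379" \
--storage "s3://${Bucket}/${Folder}" \
--s3.region "${region}" \
--send-credentials-to-tikv=true \
--log-file backupfull.log
```
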
releases/release-7.6.0.md (6 changes: 2 additions & 4 deletions)
@@ -69,16 +69,15 @@ Quick access: [Quick start](https://docs.pingcap.com/tidb/v7.6/quick-start-with-

As a TiDB cluster scales up, it becomes increasingly crucial to quickly restore the cluster from failures to minimize business downtime. Before v7.6.0, the Region scattering algorithm is a primary bottleneck for restore performance. In v7.6.0, BR optimizes the Region scattering algorithm, which quickly splits the restore task into a large number of small tasks and scatters them to all TiKV nodes in batches. The new parallel recovery algorithm fully utilizes the resources of each TiKV node, thereby achieving a rapid parallel recovery. In several real-world cases, the snapshot restore speed of the cluster is improved by about 10 times in large-scale Region scenarios.

- The new coarse-grained Region scatter algorithm is experimental. To use it, you can configure the `--granularity="coarse-grained"` parameter in the `br` command. You can also control the concurrency of download tasks on each TiKV node by setting the `--tikv-max-restore-concurrency` parameter. For example:
+ The new coarse-grained Region scatter algorithm is experimental. To use it, you can configure the `--granularity="coarse-grained"` parameter in the `br` command. For example:

```bash
br restore full \
--pd "${PDIP}:2379" \
--storage "s3://${Bucket}/${Folder}" \
--s3.region "${region}" \
--granularity "coarse-grained" \
- --tikv-max-restore-concurrency 128 \
- --send-credentials-to-tikv=true \
+ --send-credentials-to-tikv=true \
--log-file restorefull.log
```
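
As a companion illustration (not part of this commit), the following sketch restores a single database instead of the whole cluster, which can be useful when only part of the data needs to be recovered. The database name `mydb` and the log file name `restoredb.log` are hypothetical; the other placeholders mirror the example above, and the experimental `--granularity` flag is omitted here rather than assumed to apply to `br restore db`.

```bash
# Illustrative sketch: restore one database from the same snapshot.
# "mydb" is a hypothetical database name; placeholders are as in the example above.
br restore db \
--db "mydb" \
--pd "${PDIP}:2379" \
--storage "s3://${Bucket}/${Folder}" \
--s3.region "${region}" \
--send-credentials-to-tikv=true \
--log-file restoredb.log
```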

@@ -293,7 +292,6 @@ Quick access: [Quick start](https://docs.pingcap.com/tidb/v7.6/quick-start-with-
| TiDB Lightning| [`tidb.pd-addr`](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task) | Modified | Configures the addresses of the PD Servers. Starting from v7.6.0, TiDB supports setting multiple PD addresses. |
| TiDB Lightning | [`block-size`](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task) | Newly added | Controls the I/O block size for sorting local files in Physical Import Mode (`backend='local'`). The default value is `16KiB`. When the disk IOPS is a bottleneck, you can increase this value to improve performance. |
| BR | [`--granularity`](/br/br-snapshot-guide.md#performance-and-impact-of-snapshot-restore) | Newly added | Uses the coarse-grained Region scatter algorithm (experimental) by specifying `--granularity="coarse-grained"`. This accelerates restore speed in large-scale Region scenarios. |
- | BR | [`--tikv-max-restore-concurrency`](/br/br-snapshot-guide.md#performance-and-impact-of-snapshot-restore) | Newly added | Controls the concurrency of download tasks for each TiKV node when using the coarse-grained Region scatter algorithm. |
| TiCDC | [`compression`](/ticdc/ticdc-changefeed-config.md) | Newly added | Controls the behavior to compress redo log files. |
| TiCDC | [`encoding-worker-num`](/ticdc/ticdc-changefeed-config.md) | Newly added | Controls the number of encoding and decoding workers in the redo module. The default value is `16`. |
| TiCDC | [`flush-worker-num`](/ticdc/ticdc-changefeed-config.md) | Newly added | Controls the number of flushing workers in the redo module. The default value is `8`. |
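
As a rough illustration of the two TiDB Lightning parameters listed above (not part of this commit), the sketch below writes a minimal `tidb-lightning.toml` and starts an import. The section placement (`[tikv-importer]` for `block-size`, `[tidb]` for `pd-addr`), the comma-separated multi-address syntax, and all addresses and paths are assumptions for illustration only.

```bash
# Sketch only: an illustrative tidb-lightning.toml exercising block-size and pd-addr.
# Section names, the comma-separated pd-addr syntax, and all values are assumptions.
cat > tidb-lightning.toml <<'EOF'
[tikv-importer]
backend = "local"
sorted-kv-dir = "/data/sorted-kv"    # illustrative local sort directory

# Larger I/O block size for sorting local files; the documented default is 16KiB.
block-size = "32KiB"

[tidb]
host = "10.0.1.10"                   # illustrative TiDB address
port = 4000
status-port = 10080
# Multiple PD addresses (supported starting from v7.6.0), assumed comma-separated.
pd-addr = "10.0.1.1:2379,10.0.1.2:2379"
EOF

tidb-lightning --config tidb-lightning.toml
```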
