
adjust the recommended value of raft election-timeout in multi dc deplo… #16561

Merged: 9 commits, Feb 22, 2024
4 changes: 2 additions & 2 deletions config-templates/geo-redundancy-deployment.yaml
```diff
@@ -107,8 +107,8 @@ tikv_servers:
       host: host1
       readpool.storage.use-unified-pool: true
       readpool.storage.low-concurrency: 10
-      raftstore.raft-min-election-timeout-ticks: 1000
-      raftstore.raft-max-election-timeout-ticks: 1020
+      raftstore.raft-min-election-timeout-ticks: 50
+      raftstore.raft-max-election-timeout-ticks: 60
 monitoring_servers:
   - host: 10.0.1.16
 grafana_servers:
```
4 changes: 2 additions & 2 deletions dr-multi-replica.md
```diff
@@ -74,8 +74,8 @@ In this example, TiDB contains five replicas and three regions. Region 1 is the
     config:
       server.labels: { Region: "Region3", AZ: "AZ5" }
 
-      raftstore.raft-min-election-timeout-ticks: 1000
-      raftstore.raft-max-election-timeout-ticks: 1200
+      raftstore.raft-min-election-timeout-ticks: 50
+      raftstore.raft-max-election-timeout-ticks: 60
 
 monitoring_servers:
   - host: tidb-dr-test2
```
4 changes: 2 additions & 2 deletions geo-distributed-deployment-topology.md
````diff
@@ -57,8 +57,8 @@ This section describes the key parameter configuration of the TiDB geo-distribut
 - To prevent remote TiKV nodes from launching unnecessary Raft elections, it is required to increase the minimum and maximum number of ticks that the remote TiKV nodes need to launch an election. The two parameters are set to `0` by default.
 
     ```yaml
-    raftstore.raft-min-election-timeout-ticks: 1000
-    raftstore.raft-max-election-timeout-ticks: 1020
+    raftstore.raft-min-election-timeout-ticks: 50
+    raftstore.raft-max-election-timeout-ticks: 60
    ```
 
 #### PD parameters
````
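To put these tick counts in wall-clock terms, here is a minimal sketch assuming the TiKV default `raftstore.raft-base-tick-interval` of `1s`, which this change does not touch:

```yaml
# Illustrative only: how election-timeout ticks map to wall-clock time,
# assuming the default raftstore.raft-base-tick-interval of 1s.
raftstore.raft-base-tick-interval: 1s            # one Raft tick = 1 second
raftstore.raft-min-election-timeout-ticks: 50    # 50 ticks x 1s = ~50 seconds minimum
raftstore.raft-max-election-timeout-ticks: 60    # 60 ticks x 1s = ~60 seconds maximum
# The previous recommendation of 1000-1020 ticks corresponds to roughly 17 minutes.
```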
13 changes: 9 additions & 4 deletions three-data-centers-in-two-cities-deployment.md
````diff
@@ -114,8 +114,8 @@ tikv_servers:
   - host: 10.63.10.34
     config:
       server.labels: { az: "3", replication zone: "5", rack: "5", host: "34" }
-      raftstore.raft-min-election-timeout-ticks: 1000
-      raftstore.raft-max-election-timeout-ticks: 1200
+      raftstore.raft-min-election-timeout-ticks: 50
+      raftstore.raft-max-election-timeout-ticks: 60
 
 monitoring_servers:
   - host: 10.63.10.60
@@ -175,10 +175,15 @@ In the deployment of three AZs in two regions, to optimize performance, you need
 - Optimize the network configuration of the TiKV node in another region (San Francisco). Modify the following TiKV parameters for AZ3 in San Francisco and try to prevent the replica in this TiKV node from participating in the Raft election.
 
     ```yaml
-    raftstore.raft-min-election-timeout-ticks: 1000
-    raftstore.raft-max-election-timeout-ticks: 1200
+    raftstore.raft-min-election-timeout-ticks: 50
+    raftstore.raft-max-election-timeout-ticks: 60
     ```
+
+    > Note:
+    >
+    > Setting larger values for `raftstore.raft-min-election-timeout-ticks` and `raftstore.raft-max-election-timeout-ticks` significantly decreases the likelihood that peers on this TiKV node become Region leaders. However, in a disaster scenario where some TiKV nodes are offline and the Raft logs of peers on the surviving nodes have fallen behind, only the peer on this TiKV node may be able to become a Region leader. Because that peer must wait at least `raftstore.raft-min-election-timeout-ticks` ticks (one second per tick by default) before starting a campaign, avoid setting these values excessively high; otherwise the cluster's availability can be affected in this extreme scenario.
+
````
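If you are applying the new values to a cluster that is already running rather than at deployment time, a hypothetical TiUP workflow could look like the following; the cluster name `tidb-geo` is a placeholder:

```bash
# Hypothetical workflow; "tidb-geo" is a placeholder cluster name.
# Edit the topology and set the two raftstore election-timeout values
# shown above for the remote TiKV node.
tiup cluster edit-config tidb-geo

# Roll the change out to TiKV nodes only (-R restricts by role).
tiup cluster reload tidb-geo -R tikv
```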

- Configure scheduling. After the cluster is started, use the `tiup ctl:v<CLUSTER_VERSION> pd` tool to modify the scheduling policy and set the number of TiKV Raft replicas as planned. In this example, the number of replicas is five.

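A minimal sketch of this scheduling step, assuming the PD endpoint `http://<pd_ip>:2379` (a placeholder) and pd-ctl's standard `config set` subcommand:

```bash
# Minimal sketch; <CLUSTER_VERSION> and <pd_ip> are placeholders.
# Set the planned number of Raft replicas (five in this example).
tiup ctl:v<CLUSTER_VERSION> pd -u http://<pd_ip>:2379 config set max-replicas 5

# Verify the resulting replication configuration.
tiup ctl:v<CLUSTER_VERSION> pd -u http://<pd_ip>:2379 config show replication
```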