Apply suggestions from code review
hfxsd authored Feb 22, 2024
1 parent 797a395 commit 8a1fe44
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion geo-distributed-deployment-topology.md
@@ -62,7 +62,7 @@ This section describes the key parameter configuration of the TiDB geo-distributed
```
> **Note:**
>
- > Using `raftstore.raft-min-election-timeout-ticks` and `raftstore.raft-max-election-timeout-ticks` to configure larger tick values for a TiKV node can significantly decrease the likelihood of a peer on the TiKV node becoming the leader. However, in a disaster scenario where some TiKV nodes are offline and the active TiKV node lags behind in Raft logs, only the Region on this specific TiKV node with large tick values can become the leader. In the event that the leader becomes unavailable, the Region on this TiKV node must wait for at least the duration set by `raftstore.raft-min-election-timeout-ticks' to become a new leader. It is advisable not to set these values excessively large to prevent potential impact on the cluster availability in such scenarios.
+ > Using `raftstore.raft-min-election-timeout-ticks` and `raftstore.raft-max-election-timeout-ticks` to configure larger election timeout ticks for a TiKV node can significantly decrease the likelihood of a peer on the TiKV node becoming the leader. However, in a disaster scenario where some TiKV nodes are offline and the active TiKV node lags behind in Raft logs, only the Region on this specific TiKV node with large election timeout ticks can become the leader. In the event that the leader becomes unavailable, the Region on this TiKV node must wait for at least the duration set by `raftstore.raft-min-election-timeout-ticks` to become a new leader. It is advisable not to set these values excessively large to prevent potential impact on the cluster availability in such scenarios.
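
The setting described in the note above could be sketched as a TiUP topology fragment. This is an illustration only: the host address and tick values below are assumptions, not recommendations, and the right values depend on your `raftstore.raft-base-tick-interval` and availability requirements.

```yaml
# Hypothetical TiUP topology fragment (host and values are illustrative).
# Raising the election timeout ticks only on the TiKV node in the
# disaster-recovery AZ makes its Region peers unlikely to win leader
# elections during normal operation.
tikv_servers:
  - host: 10.0.1.5                                      # example DR-AZ node
    config:
      raftstore.raft-min-election-timeout-ticks: 50     # illustrative value
      raftstore.raft-max-election-timeout-ticks: 60     # illustrative value
```

As the note warns, the larger these ticks are, the longer a failover involving this node can take, so the values should stay as small as the deployment allows.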
#### PD parameters

- The PD metadata information records the topology of the TiKV cluster. PD schedules the Raft Group replicas on the following four dimensions:
2 changes: 1 addition & 1 deletion three-data-centers-in-two-cities-deployment.md
@@ -181,7 +181,7 @@ In the deployment of three AZs in two regions, to optimize performance, you need

> **Note:**
>
- > Using `raftstore.raft-min-election-timeout-ticks` and `raftstore.raft-max-election-timeout-ticks` to configure larger tick values for a TiKV node can significantly decrease the likelihood of a peer on the TiKV node becoming the leader. However, in a disaster scenario where some TiKV nodes are offline and the active TiKV node lags behind in Raft logs, only the Region on this specific TiKV node with large tick values can become the leader. In the event that the leader becomes unavailable, the Region on this TiKV node must wait for at least the duration set by `raftstore.raft-min-election-timeout-ticks' to become a new leader. It is advisable not to set these values excessively large to prevent potential impact on the cluster availability in such scenarios.
+ > Using `raftstore.raft-min-election-timeout-ticks` and `raftstore.raft-max-election-timeout-ticks` to configure larger election timeout ticks for a TiKV node can significantly decrease the likelihood of a peer on the TiKV node becoming the leader. However, in a disaster scenario where some TiKV nodes are offline and the active TiKV node lags behind in Raft logs, only the Region on this specific TiKV node with large election timeout ticks can become the leader. In the event that the leader becomes unavailable, the Region on this TiKV node must wait for at least the duration set by `raftstore.raft-min-election-timeout-ticks` to become a new leader. It is advisable not to set these values excessively large to prevent potential impact on the cluster availability in such scenarios.

- Configure scheduling. After the cluster is enabled, use the `tiup ctl:v<CLUSTER_VERSION> pd` tool to modify the scheduling policy. Modify the number of TiKV Raft replicas. Configure this number as planned. In this example, the number of replicas is five.
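
The replica-count step above could look roughly like the following pd-ctl invocation. The PD endpoint is a placeholder, and `config set max-replicas` is the standard pd-ctl way to change the Raft replica count; verify against your cluster version before running anything.

```
# Open pd-ctl through TiUP (endpoint is a placeholder) and set the number
# of Raft replicas to five, matching the planned topology.
tiup ctl:v<CLUSTER_VERSION> pd -u http://<pd-host>:2379 config set max-replicas 5
```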

