Preview of PR pingcap/docs#19727, committed by Docsite Preview Bot on Dec 23, 2024 (commit 25949b4, parent 3687dcd); 1 changed file with 9 additions and 9 deletions.
This document outlines the performance improvements in v8.5.0 across the following aspects:

## General performance

With the default Region size increased from 96 MiB to 256 MiB and some other improvements in v8.5.0, significant performance improvements are observed:

- `oltp_insert` performance improves by 27%.
- `Analyze` performance shows a major boost of approximately 45%.
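
As an illustration only (this is not part of the original test setup), the Region split size on a running cluster can typically be inspected and, where the item is online-changeable in your version, adjusted through TiKV's coprocessor configuration. The value below mirrors the new v8.5.0 default:

```sql
-- Illustrative sketch: check and adjust the TiKV Region split size.
-- `SET CONFIG` changes the running cluster; persist the setting in the
-- TiKV configuration file so that it survives restarts.
SHOW CONFIG WHERE type = 'tikv' AND name = 'coprocessor.region-split-size';
SET CONFIG tikv `coprocessor.region-split-size` = '256MiB';
```
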

In cloud environments, transient or sustained IO latency fluctuations on cloud disks are a common challenge.

### Solution

TiDB v8.5.0 introduces multiple enhancements to address the impact of cloud disk IO jitter on performance. These improvements effectively mitigate the effects of IO latency fluctuations, ensuring more stable and reliable performance.

### Test environment
- Cluster topology: TiDB (32 vCPU, 64 GiB) \* 3 + TiKV (32 vCPU, 64 GiB) \* 6
- Workload: a read/write ratio of 2:1, with simulated cloud disk IO delays or hangs on one TiKV node
### Test results

The failover time under IO latency jitter is 30% shorter, and P99/P999 latency is reduced by 70% or more.

- Test results without IO latency jitter improvement
Large-scale transactions, such as bulk data updates, system migrations, and ETL workflows, involve processing millions of rows and are vital for supporting critical operations. While TiDB excels as a distributed SQL database, handling such transactions at scale presents two significant challenges:

- Memory limits: in versions earlier than TiDB v8.1.0, all transaction mutations are held in memory throughout the transaction lifecycle, which strains resources and reduces performance. For operations involving millions of rows, this could lead to excessive memory usage and, in some cases, Out of Memory (OOM) errors when resources are insufficient.
- Performance slowdowns: managing large in-memory buffers relies on red-black trees, which introduces computational overhead. As buffers grow, their operations slow down due to the *O(N log N)* complexity inherent in these data structures.
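
One way to sidestep both issues for a single huge DML statement is TiDB's pipelined DML mode (introduced as an experimental feature in v8.0.0), which flushes mutations to TiKV during execution instead of buffering the entire transaction in TiDB memory. A hedged sketch, with hypothetical table names:

```sql
-- Hypothetical tables. Pipelined DML requires autocommit and applies to
-- single-statement INSERT/UPDATE/DELETE/REPLACE; mutations are streamed
-- to TiKV rather than accumulated in the in-memory buffer.
SET SESSION tidb_dml_type = "bulk";
INSERT INTO order_archive SELECT * FROM orders WHERE created_at < '2024-01-01';
SET SESSION tidb_dml_type = "standard";
```
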

The execution speed increases by 2x, and the maximum TiDB memory usage decreases significantly.

Horizontal scaling is a core capability of TiKV, enabling the system to scale in or out as needed. As business demands grow and the number of tenants increases, TiDB clusters experience rapid growth in databases, tables, and data volume. Scaling out TiKV nodes quickly becomes essential to maintaining service quality.

In some scenarios, TiDB hosts a large number of databases and tables. When these tables are small or empty, TiKV accumulates a significant number of tiny Regions, especially when the number of tables grows to a large scale (such as 1 million or more). These small Regions introduce a substantial maintenance burden, increase resource overhead, and reduce efficiency.

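
To gauge how widespread the problem is on a given cluster, the Region metadata exposed through `INFORMATION_SCHEMA` can be queried. This sketch (the 2 MiB threshold is illustrative) counts tiny Regions:

```sql
-- Illustrative: count Regions whose approximate size is below 2 MiB.
-- APPROXIMATE_SIZE is reported in MiB in TIKV_REGION_STATUS.
SELECT COUNT(*) AS tiny_regions
FROM INFORMATION_SCHEMA.TIKV_REGION_STATUS
WHERE APPROXIMATE_SIZE < 2;
```
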
### Solution

To address this issue, TiDB v8.5.0 improves the performance of merging small Regions, reducing internal overhead and improving resource utilization. Additionally, TiDB v8.5.0 includes several other enhancements to further improve TiKV scaling performance.

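
Region merging is driven by PD's scheduling configuration. As a hedged illustration (the values shown are examples, not the v8.5.0 defaults), the relevant knobs can be inspected and tuned online:

```sql
-- Illustrative: PD settings that govern small-Region merging.
-- A Region is eligible for merging when it is smaller than
-- max-merge-region-size (MiB) and has fewer than max-merge-region-keys keys;
-- merge-schedule-limit caps the number of concurrent merge operations.
SHOW CONFIG WHERE type = 'pd' AND name LIKE 'schedule.max-merge%';
SET CONFIG pd `schedule.max-merge-region-size` = 54;
SET CONFIG pd `schedule.merge-schedule-limit` = 8;
```
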
### Test environment

#### Merging small Regions

- Cluster topology: TiDB (16 vCPU, 32 GiB) \* 1 + TiKV (16 vCPU, 32 GiB) \* 3
- Dataset: nearly 1 million small tables, with the size of each table < 2 MiB

TiKV scaling performance improves by over 40%, and the TiKV node scaling-out duration is significantly reduced.

## Benchmark

In addition to the preceding test data, you can refer to the following benchmark results for v8.5.0 performance:

- [TPC-C performance test report](/tidb-cloud/v8.5-performance-benchmarking-with-tpcc.md)
- [Sysbench performance test report](/tidb-cloud/v8.5-performance-benchmarking-with-sysbench.md)
