*: improve a batch of summary in metadata #15810

Merged: 2 commits, Dec 29, 2023
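Every hunk in this PR follows the same pattern: a `summary` key is added to (or rewritten in) the YAML frontmatter at the top of a docs page. An illustrative sketch, with the field values abridged:

```yaml
---
title: TiDB Sysbench Performance Test Report -- v2.0.0 vs. v1.0.0
aliases: ['/docs/dev/benchmark/benchmark-sysbench-v2/']
summary: TiDB 2.0 GA outperforms TiDB 1.0 GA in `Select` and `Insert` tests...
---
```

The `summary` value is a plain YAML string, so summaries containing colons or quotes would need quoting, but the ones in this PR do not.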
1 change: 1 addition & 0 deletions benchmark/benchmark-sysbench-v2.md
@@ -1,6 +1,7 @@
---
title: TiDB Sysbench Performance Test Report -- v2.0.0 vs. v1.0.0
aliases: ['/docs/dev/benchmark/benchmark-sysbench-v2/','/docs/dev/benchmark/sysbench-v2/']
summary: TiDB 2.0 GA outperforms TiDB 1.0 GA in `Select` and `Insert` tests, with a 10% increase in `Select` query performance and a slight improvement in `Insert` query performance. However, the OLTP performance of both versions is almost the same.
---

# TiDB Sysbench Performance Test Report -- v2.0.0 vs. v1.0.0
1 change: 1 addition & 0 deletions benchmark/benchmark-sysbench-v3.md
@@ -1,6 +1,7 @@
---
title: TiDB Sysbench Performance Test Report -- v2.1 vs. v2.0
aliases: ['/docs/dev/benchmark/benchmark-sysbench-v3/','/docs/dev/benchmark/sysbench-v3/']
summary: TiDB 2.1 outperforms TiDB 2.0 in the `Point Select` test, with a 50% increase in query performance. However, the `Update Non-Index` and `Update Index` tests show similar performance between the two versions. The test was conducted in September 2018 in Beijing, China, using a specific test environment and configuration.
---

# TiDB Sysbench Performance Test Report -- v2.1 vs. v2.0
1 change: 1 addition & 0 deletions benchmark/benchmark-sysbench-v5-vs-v4.md
@@ -1,5 +1,6 @@
---
title: TiDB Sysbench Performance Test Report -- v5.0 vs. v4.0
summary: TiDB v5.0 outperforms v4.0 in Sysbench performance tests. Point Select performance improved by 2.7%, Update Non-index by 81%, Update Index by 28%, and Read Write by 9%. The test aimed to compare performance in the OLTP scenario using AWS EC2. Hardware and software configurations were specified for both versions. Test plan included data preparation and execution. Test results were presented in tables and graphs.
---

# TiDB Sysbench Performance Test Report -- v5.0 vs. v4.0
1 change: 1 addition & 0 deletions benchmark/benchmark-sysbench-v5.1.0-vs-v5.0.2.md
@@ -1,5 +1,6 @@
---
title: TiDB Sysbench Performance Test Report -- v5.1.0 vs. v5.0.2
summary: TiDB v5.1.0 shows a 19.4% improvement in Point Select performance compared to v5.0.2. However, the Read Write and Update Index performance is slightly reduced in v5.1.0. The test was conducted on AWS EC2 using Sysbench with specific hardware and software configurations. The test plan involved deploying, importing data, and performing stress tests. Overall, v5.1.0 demonstrates improved Point Select performance but reduced performance in other areas.
---

# TiDB Sysbench Performance Test Report -- v5.1.0 vs. v5.0.2
1 change: 1 addition & 0 deletions benchmark/benchmark-sysbench-v5.2.0-vs-v5.1.1.md
@@ -1,5 +1,6 @@
---
title: TiDB Sysbench Performance Test Report -- v5.2.0 vs. v5.1.1
summary: TiDB v5.2.0 shows an 11.03% improvement in Point Select performance compared to v5.1.1. However, other scenarios show a slight reduction in performance. The hardware and software configurations, test plan, and results are detailed in the report.
---

# TiDB Sysbench Performance Test Report -- v5.2.0 vs. v5.1.1
1 change: 1 addition & 0 deletions benchmark/benchmark-sysbench-v5.3.0-vs-v5.2.2.md
@@ -1,5 +1,6 @@
---
title: TiDB Sysbench Performance Test Report -- v5.3.0 vs. v5.2.2
summary: TiDB v5.3.0 and v5.2.2 were compared in a Sysbench performance test for Online Transactional Processing (OLTP). Results show that v5.3.0 performance is nearly the same as v5.2.2. Point Select performance of v5.3.0 is reduced by 0.81%, Update Non-index performance is improved by 0.95%, Update Index performance is improved by 1.83%, and Read Write performance is reduced by 0.62%.
---

# TiDB Sysbench Performance Test Report -- v5.3.0 vs. v5.2.2
1 change: 1 addition & 0 deletions benchmark/benchmark-sysbench-v5.4.0-vs-v5.3.0.md
@@ -1,5 +1,6 @@
---
title: TiDB Sysbench Performance Test Report -- v5.4.0 vs. v5.3.0
summary: TiDB v5.4.0 shows improved performance of 2.59% to 4.85% in write-heavy workloads compared to v5.3.0. The test environment includes AWS EC2 with specific hardware and software configurations. The test plan involves deploying TiDB, using Sysbench to import tables, and performing stress tests. Results show performance improvements in point select, update non-index, update index, and read write scenarios.
---

# TiDB Sysbench Performance Test Report -- v5.4.0 vs. v5.3.0
1 change: 1 addition & 0 deletions benchmark/benchmark-sysbench-v6.0.0-vs-v5.4.0.md
@@ -1,5 +1,6 @@
---
title: TiDB Sysbench Performance Test Report -- v6.0.0 vs. v5.4.0
summary: TiDB v6.0.0 shows a 16.17% improvement in read-write workload performance compared to v5.4.0. Other workloads show similar performance between the two versions. The test environment includes AWS EC2 instances and the software versions used are PD v5.4.0 and v6.0.0, TiDB v5.4.0 and v6.0.0, TiKV v5.4.0 and v6.0.0, and Sysbench 1.1.0-df89d34. The parameter configurations for TiDB, TiKV, and global variables are also provided. The test plan involves deploying TiDB, importing tables, executing statements, and performing stress tests via HAProxy. Test results show performance comparisons for point select, update non-index, update index, and read-write workloads.
---

# TiDB Sysbench Performance Test Report -- v6.0.0 vs. v5.4.0
1 change: 1 addition & 0 deletions benchmark/benchmark-sysbench-v6.1.0-vs-v6.0.0.md
@@ -1,5 +1,6 @@
---
title: TiDB Sysbench Performance Test Report -- v6.1.0 vs. v6.0.0
summary: TiDB v6.1.0 shows improved performance in write-heavy workloads compared to v6.0.0, with a 2.33% ~ 4.61% improvement. The test environment includes AWS EC2 instances and Sysbench 1.1.0-df89d34. Both versions use the same parameter configuration. Test plan involves deploying, importing data, and performing stress tests. Results show slight drop in Point Select performance, while Update Non-index, Update Index, and Read Write performance are improved by 2.90%, 4.61%, and 2.23% respectively.
---

# TiDB Sysbench Performance Test Report -- v6.1.0 vs. v6.0.0
1 change: 1 addition & 0 deletions benchmark/benchmark-sysbench-v6.2.0-vs-v6.1.0.md
@@ -1,5 +1,6 @@
---
title: TiDB Sysbench Performance Test Report -- v6.2.0 vs. v6.1.0
summary: TiDB v6.2.0 and v6.1.0 show similar performance in the Sysbench test. Point Select performance slightly drops by 3.58%. Update Non-index and Update Index performance are basically unchanged, reduced by 0.85% and 0.47% respectively. Read Write performance is reduced by 1.21%.
---

# TiDB Sysbench Performance Test Report -- v6.2.0 vs. v6.1.0
1 change: 1 addition & 0 deletions benchmark/benchmark-tidb-using-sysbench.md
@@ -1,6 +1,7 @@
---
title: How to Test TiDB Using Sysbench
aliases: ['/docs/dev/benchmark/benchmark-tidb-using-sysbench/','/docs/dev/benchmark/how-to-run-sysbench/']
summary: This document describes how to test TiDB using Sysbench 1.0 or later. Adjust TiDB and TiKV log levels and the Sysbench configuration for accurate results, import data efficiently, and address common issues related to proxy use and CPU utilization rates.
---

# How to Test TiDB Using Sysbench
1 change: 1 addition & 0 deletions benchmark/benchmark-tidb-using-tpcc.md
@@ -1,6 +1,7 @@
---
title: How to Run TPC-C Test on TiDB
aliases: ['/docs/dev/benchmark/benchmark-tidb-using-tpcc/','/docs/dev/benchmark/how-to-run-tpcc/']
summary: This document describes how to test TiDB using TPC-C, an online transaction processing benchmark. It specifies the initial state of the database, provides commands for loading data, running the test, and cleaning up test data. The test measures the maximum qualified throughput using tpmC (transactions per minute).
---

# How to Run TPC-C Test on TiDB
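The summary above mentions tpmC, the TPC-C throughput metric: qualified NewOrder transactions completed per minute. A minimal sketch of the arithmetic, with hypothetical numbers (the real report's raw counts are not shown here):

```python
def tpmc(new_order_txns: int, duration_seconds: float) -> float:
    """Return NewOrder transactions per minute (tpmC)."""
    return new_order_txns * 60.0 / duration_seconds

# Hypothetical example: 1,200,000 NewOrder transactions in a 10-minute run.
print(tpmc(1_200_000, 600))  # 120000.0
```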
1 change: 1 addition & 0 deletions benchmark/benchmark-tpch.md
@@ -1,6 +1,7 @@
---
title: TiDB TPC-H 50G Performance Test Report V2.0
aliases: ['/docs/dev/benchmark/benchmark-tpch/','/docs/dev/benchmark/tpch/']
summary: TiDB TPC-H 50G Performance Test compared TiDB 1.0 and TiDB 2.0 in an OLAP scenario. Test results show that TiDB 2.0 outperformed TiDB 1.0 in most queries, with significant improvements in query processing time. Some queries in TiDB 1.0 did not return results, while others had high memory consumption. Future releases plan to support VIEW and address these issues.
---

# TiDB TPC-H 50G Performance Test Report
1 change: 1 addition & 0 deletions benchmark/v3.0-performance-benchmarking-with-sysbench.md
@@ -1,6 +1,7 @@
---
title: TiDB Sysbench Performance Test Report -- v3.0 vs. v2.1
aliases: ['/docs/dev/benchmark/v3.0-performance-benchmarking-with-sysbench/','/docs/dev/benchmark/sysbench-v4/']
summary: TiDB v3.0 and v2.1 were compared in an OLTP scenario test in June 2019 in Beijing. The test ran on AWS EC2 using CentOS-7.6.1810-Nitro image. Sysbench was used to import 16 tables with 10,000,000 rows each. TiDB v3.0 outperformed v2.1 in all tests, with higher QPS and lower latency. Configuration changes in v3.0 contributed to the improved performance.
---

# TiDB Sysbench Performance Test Report -- v3.0 vs. v2.1
1 change: 1 addition & 0 deletions benchmark/v3.0-performance-benchmarking-with-tpcc.md
@@ -1,6 +1,7 @@
---
title: TiDB TPC-C Performance Test Report -- v3.0 vs. v2.1
aliases: ['/docs/dev/benchmark/v3.0-performance-benchmarking-with-tpcc/','/docs/dev/benchmark/tpcc/']
summary: TiDB v3.0 outperforms v2.1 in TPC-C performance test. With 1000 warehouses, v3.0 achieved 450% higher performance than v2.1.
---

# TiDB TPC-C Performance Test Report -- v3.0 vs. v2.1
1 change: 1 addition & 0 deletions benchmark/v5.0-performance-benchmarking-with-tpcc.md
@@ -1,5 +1,6 @@
---
title: TiDB TPC-C Performance Test Report -- v5.0 vs. v4.0
summary: TiDB v5.0 outperforms v4.0 in TPC-C performance, showing a 36% increase.
---

# TiDB TPC-C Performance Test Report -- v5.0 vs. v4.0
1 change: 1 addition & 0 deletions benchmark/v5.1-performance-benchmarking-with-tpcc.md
@@ -1,5 +1,6 @@
---
title: TiDB TPC-C Performance Test Report -- v5.1.0 vs. v5.0.2
summary: TiDB v5.1.0 TPC-C performance is 2.8% better than v5.0.2. Test environment: AWS EC2. Hardware: PD m5.xlarge (3), TiKV i3.4xlarge (3), TiDB c5.4xlarge (3), TPC-C c5.9xlarge (1). Software: PD, TiDB, and TiKV v5.0.2 and v5.1.0, TiUP 1.5.1. Parameter configuration is the same for both versions. The test plan includes deployment, database creation, data import, stress testing, and result extraction.
---

# TiDB TPC-C Performance Test Report -- v5.1.0 vs. v5.0.2
1 change: 1 addition & 0 deletions benchmark/v5.2-performance-benchmarking-with-tpcc.md
@@ -1,5 +1,6 @@
---
title: TiDB TPC-C Performance Test Report -- v5.2.0 vs. v5.1.1
summary: TiDB v5.2.0 TPC-C performance is 4.22% lower than v5.1.1. Test environment: AWS EC2. Hardware and software configurations are the same for both versions. The test plan includes deployment, database creation, data import, stress testing, and result extraction.
---

# TiDB TPC-C Performance Test Report -- v5.2.0 vs. v5.1.1
1 change: 1 addition & 0 deletions benchmark/v5.3-performance-benchmarking-with-tpcc.md
@@ -1,5 +1,6 @@
---
title: TiDB TPC-C Performance Test Report -- v5.3.0 vs. v5.2.2
summary: TiDB v5.3.0 TPC-C performance is slightly reduced by 2.99% compared to v5.2.2. The test used AWS EC2 with specific hardware and software configurations. The test plan involved deploying TiDB, creating a database, importing data, and running stress tests. The result showed a decrease in performance across different thread counts.
---

# TiDB TPC-C Performance Test Report -- v5.3.0 vs. v5.2.2
1 change: 1 addition & 0 deletions benchmark/v5.4-performance-benchmarking-with-tpcc.md
@@ -1,5 +1,6 @@
---
title: TiDB TPC-C Performance Test Report -- v5.4.0 vs. v5.3.0
summary: TiDB v5.4.0 TPC-C performance is 3.16% better than v5.3.0. The improvement is consistent across different thread counts: 2.80% (50 threads), 4.27% (100 threads), 3.45% (200 threads), and 2.11% (400 threads).
---

# TiDB TPC-C Performance Test Report -- v5.4.0 vs. v5.3.0
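Improvement percentages like those in the summary above are ratios of the two versions' tpmC results. A small sketch of the formula, using hypothetical tpmC values (the report's raw numbers are not reproduced here):

```python
def improvement_pct(new: float, old: float) -> float:
    """Percentage improvement of `new` over `old`; negative means a regression."""
    return (new - old) / old * 100.0

# Hypothetical tpmC values chosen only to illustrate the arithmetic.
print(round(improvement_pct(103_160, 100_000), 2))  # 3.16
```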
1 change: 1 addition & 0 deletions benchmark/v5.4-performance-benchmarking-with-tpch.md
@@ -1,5 +1,6 @@
---
title: TiDB TPC-H Performance Test Report -- v5.4 MPP mode vs. Greenplum 6.15.0 and Apache Spark 3.1.1
summary: TiDB v5.4 MPP mode outperforms Greenplum 6.15.0 and Apache Spark 3.1.1 in TPC-H 100 GB performance test. TiDB's MPP mode is 2-3 times faster. Test environment includes hardware and software prerequisites, and parameter configurations for each solution. Test results show TiDB v5.4 has significantly lower query execution times compared to Greenplum and Apache Spark.
---

# TiDB TPC-H Performance Test Report -- TiDB v5.4 MPP mode vs. Greenplum 6.15.0 and Apache Spark 3.1.1
1 change: 1 addition & 0 deletions benchmark/v6.0-performance-benchmarking-with-tpcc.md
@@ -1,5 +1,6 @@
---
title: TiDB TPC-C Performance Test Report -- v6.0.0 vs. v5.4.0
summary: TiDB v6.0.0 TPC-C performance is 24.20% better than v5.4.0. The improvement is consistent across different thread counts, with the highest improvement at 26.97% for 100 threads.
---

# TiDB TPC-C Performance Test Report -- v6.0.0 vs. v5.4.0
1 change: 1 addition & 0 deletions benchmark/v6.0-performance-benchmarking-with-tpch.md
@@ -1,5 +1,6 @@
---
title: Performance Comparison between TiFlash and Greenplum/Spark
summary: Performance Comparison between TiFlash and Greenplum/Spark. Refer to TiDB v5.4 TPC-H performance benchmarking report for details.
---

# Performance Comparison between TiFlash and Greenplum/Spark
1 change: 1 addition & 0 deletions benchmark/v6.1-performance-benchmarking-with-tpcc.md
@@ -1,5 +1,6 @@
---
title: TiDB TPC-C Performance Test Report -- v6.1.0 vs. v6.0.0
summary: TiDB v6.1.0 TPC-C performance is 2.85% better than v6.0.0. Test environment: AWS EC2, with specified hardware and software configurations. TiDB and TiKV parameter configurations are the same for both versions. HAProxy is used for load balancing. Test preparation involves database deployment, data import, and stress testing. Results show performance improvement across different thread counts.
---

# TiDB TPC-C Performance Test Report -- v6.1.0 vs. v6.0.0
1 change: 1 addition & 0 deletions benchmark/v6.1-performance-benchmarking-with-tpch.md
@@ -1,5 +1,6 @@
---
title: Performance Comparison between TiFlash and Greenplum/Spark
summary: Performance Comparison between TiFlash and Greenplum/Spark. Refer to TiDB v5.4 TPC-H performance benchmarking report for details.
---

# Performance Comparison between TiFlash and Greenplum/Spark
1 change: 1 addition & 0 deletions benchmark/v6.2-performance-benchmarking-with-tpcc.md
@@ -1,5 +1,6 @@
---
title: TiDB TPC-C Performance Test Report -- v6.2.0 vs. v6.1.0
summary: TiDB v6.2.0 TPC-C performance declined by 2.00% compared to v6.1.0. The test used AWS EC2 with specific hardware and software configurations. Test data was prepared and stress tests were conducted via HAProxy. Results showed a decline in performance across different thread counts.
---

# TiDB TPC-C Performance Test Report -- v6.2.0 vs. v6.1.0
1 change: 1 addition & 0 deletions benchmark/v6.2-performance-benchmarking-with-tpch.md
@@ -1,5 +1,6 @@
---
title: Performance Comparison between TiFlash and Greenplum/Spark
summary: Performance Comparison between TiFlash and Greenplum/Spark. Refer to TiDB v5.4 TPC-H performance benchmarking report at the provided link.
---

# Performance Comparison between TiFlash and Greenplum/Spark
2 changes: 1 addition & 1 deletion best-practices/grafana-monitor-best-practices.md
@@ -1,6 +1,6 @@
---
title: Best Practices for Monitoring TiDB Using Grafana
summary: Learn seven tips for efficiently using Grafana to monitor TiDB.
summary: Deploy a TiDB cluster using TiUP and add Grafana and Prometheus for monitoring. Prometheus collects metrics from TiDB components, and Grafana displays them, which helps analyze cluster status and diagnose problems. Tips for efficient Grafana use include modifying query expressions, switching the Y-axis scale, and using the API to retrieve query results.
aliases: ['/docs/dev/best-practices/grafana-monitor-best-practices/','/docs/dev/reference/best-practices/grafana-monitor/']
---

2 changes: 1 addition & 1 deletion best-practices/haproxy-best-practices.md
@@ -1,6 +1,6 @@
---
title: Best Practices for Using HAProxy in TiDB
summary: This document describes best practices for configuration and usage of HAProxy in TiDB.
summary: HAProxy is a free, open-source load balancer and proxy server for TCP and HTTP-based applications. It provides high availability, load balancing, health checks, sticky sessions, SSL support, and monitoring. To deploy HAProxy, ensure hardware and software requirements are met, then install and configure it. Use the latest stable version for best results.
aliases: ['/docs/dev/best-practices/haproxy-best-practices/','/docs/dev/reference/best-practices/haproxy/']
---

2 changes: 1 addition & 1 deletion best-practices/high-concurrency-best-practices.md
@@ -1,6 +1,6 @@
---
title: Highly Concurrent Write Best Practices
summary: Learn best practices for highly-concurrent write-intensive workloads in TiDB.
summary: This document provides best practices for handling highly-concurrent write-heavy workloads in TiDB. It addresses challenges and solutions for data distribution, hotspot cases, and complex hotspot problems, and discusses parameter configuration for optimizing performance.
aliases: ['/docs/dev/best-practices/high-concurrency-best-practices/','/docs/dev/reference/best-practices/high-concurrency/']
---

2 changes: 1 addition & 1 deletion best-practices/java-app-best-practices.md
@@ -1,6 +1,6 @@
---
title: Best Practices for Developing Java Applications with TiDB
summary: Learn the best practices for developing Java applications with TiDB.
summary: This document introduces best practices for developing Java applications with TiDB, covering database-related components, JDBC usage, connection pool configuration, data access framework, Spring Transaction, and troubleshooting tools. TiDB is highly compatible with MySQL, so most MySQL-based Java application best practices also apply to TiDB.
aliases: ['/docs/dev/best-practices/java-app-best-practices/','/docs/dev/reference/best-practices/java-app/']
---

2 changes: 1 addition & 1 deletion best-practices/massive-regions-best-practices.md
@@ -1,6 +1,6 @@
---
title: Best Practices for TiKV Performance Tuning with Massive Regions
summary: Learn how to tune the performance of TiKV with a massive amount of Regions.
summary: TiKV performance tuning involves reducing the number of Regions and messages, increasing Raftstore concurrency, enabling Hibernate Region and Region Merge, adjusting Raft base tick interval, increasing TiKV instances, and adjusting Region size. Other issues include slow PD leader switching and outdated PD routing information.
aliases: ['/docs/dev/best-practices/massive-regions-best-practices/','/docs/dev/reference/best-practices/massive-regions/']
---

2 changes: 1 addition & 1 deletion best-practices/pd-scheduling-best-practices.md
@@ -1,6 +1,6 @@
---
title: PD Scheduling Best Practices
summary: Learn best practice and strategy for PD scheduling.
summary: This document summarizes PD scheduling best practices, including scheduling process, load balancing, hot regions scheduling, cluster topology awareness, scale-down and failure recovery, region merge, query scheduling status, and control scheduling strategy. It also covers common scenarios such as uneven distribution of leaders/regions, slow node recovery, and troubleshooting TiKV nodes.
aliases: ['/docs/dev/best-practices/pd-scheduling-best-practices/','/docs/dev/reference/best-practices/pd-scheduling/']
---

2 changes: 1 addition & 1 deletion best-practices/readonly-nodes.md
@@ -1,6 +1,6 @@
---
title: Best Practices for Read-Only Storage Nodes
summary: Learn how to configure read-only storage nodes to physically isolate important online services.
summary: This document introduces how to configure read-only storage nodes to physically isolate important online services from workloads that can tolerate high latency. Steps include marking TiKV nodes as read-only, using Placement Rules to store data on read-only nodes as learners, and using Follower Read to read data from read-only nodes.
---

# Best Practices for Read-Only Storage Nodes
2 changes: 1 addition & 1 deletion best-practices/three-dc-local-read.md
@@ -1,6 +1,6 @@
---
title: Local Read under Three Data Centers Deployment
summary: Learn how to use the Stale Read feature to read local data under three DCs deployment and thus reduce cross-center requests.
summary: TiDB's three data center deployment model can cause increased access latency due to cross-center data reads. To mitigate this, the Stale Read feature allows for local historical data access, reducing latency at the expense of real-time data availability. When using Stale Read in geo-distributed scenarios, TiDB accesses local replicas to avoid cross-center network latency. This is achieved by configuring the `zone` label and setting `tidb_replica_read` to `closest-replicas`. For more information on performing Stale Read, refer to the documentation.
---

# Local Read under Three Data Centers Deployment
2 changes: 1 addition & 1 deletion best-practices/three-nodes-hybrid-deployment.md
@@ -1,6 +1,6 @@
---
title: Best Practices for Three-Node Hybrid Deployment
summary: Learn the best practices for three-node hybrid deployment.
summary: A TiDB cluster can be deployed cost-effectively on three machines. Best practices for this hybrid deployment include adjusting parameters for stability and performance: limiting resource consumption, adjusting thread pool sizes, and tuning parameters for TiKV background tasks and TiDB execution operators.
---

# Best Practices for Three-Node Hybrid Deployment
2 changes: 1 addition & 1 deletion best-practices/tidb-best-practices.md
@@ -1,6 +1,6 @@
---
title: TiDB Best Practices
summary: Learn the best practices of using TiDB.
summary: This document summarizes best practices for using TiDB, covering SQL use and optimization tips for OLAP and OLTP scenarios, with a focus on TiDB-specific optimization options. It also recommends reading three blog posts introducing TiDB's technical principles before diving into the best practices.
aliases: ['/docs/dev/tidb-best-practices/']
---

2 changes: 1 addition & 1 deletion best-practices/uuid.md
@@ -1,6 +1,6 @@
---
title: UUID Best Practices
summary: Learn best practice and strategy for using UUIDs with TiDB.
summary: UUIDs, when used as primary keys, offer benefits such as reduced network trips, support in most programming languages and databases, and protection against enumeration attacks. Storing UUIDs as binary in a `BINARY(16)` column is recommended. It's also advised to avoid setting the `swap_flag` with TiDB to prevent hotspots. MySQL compatibility is available for UUIDs.
---

# UUID Best Practices
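The summary above recommends storing UUIDs as binary in a `BINARY(16)` column. A UUID is 128 bits, so its raw byte form fits that column exactly; a minimal Python sketch of the round trip an application would perform:

```python
import uuid

# A UUID is 128 bits, so it fits exactly in a BINARY(16) column.
u = uuid.uuid4()
raw = u.bytes                      # 16-byte value to store in BINARY(16)
assert len(raw) == 16

# Round-trip: rebuild the UUID from the stored bytes when reading back.
restored = uuid.UUID(bytes=raw)
assert restored == u
```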