From e90efe97daf5fe9036a21434698dca3d031c05ae Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Mon, 29 Apr 2024 10:47:57 +0800 Subject: [PATCH 1/8] lightning: updated toc sections (#17382) --- TOC.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/TOC.md b/TOC.md index 532a8b0fd1737..7e64f636574dc 100644 --- a/TOC.md +++ b/TOC.md @@ -526,10 +526,13 @@ - [Target Database Requirements](/tidb-lightning/tidb-lightning-requirements.md) - Data Sources - [Data Match Rules](/tidb-lightning/tidb-lightning-data-source.md) + - [Rename databases and tables](/tidb-lightning/tidb-lightning-data-source.md#rename-databases-and-tables) - [CSV](/tidb-lightning/tidb-lightning-data-source.md#csv) - [SQL](/tidb-lightning/tidb-lightning-data-source.md#sql) - [Parquet](/tidb-lightning/tidb-lightning-data-source.md#parquet) + - [Compressed files](/tidb-lightning/tidb-lightning-data-source.md#compressed-files) - [Customized File](/tidb-lightning/tidb-lightning-data-source.md#match-customized-files) + - [Import data from Amazon S3](/tidb-lightning/tidb-lightning-data-source.md#import-data-from-amazon-s3) - Physical Import Mode - [Requirements and Limitations](/tidb-lightning/tidb-lightning-physical-import-mode.md) - [Use Physical Import Mode](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md) From 64c1053a098aad1e828decd9ac30f8a855e49ded Mon Sep 17 00:00:00 2001 From: Monday <73807364+mondayinsighter@users.noreply.github.com> Date: Mon, 29 Apr 2024 13:50:58 +0800 Subject: [PATCH 2/8] Fix typo: retuns to returns at functions-and-operators/tidb-functions.md (#17373) --- functions-and-operators/tidb-functions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/functions-and-operators/tidb-functions.md b/functions-and-operators/tidb-functions.md index ee86bd482b449..4f8683a050f85 100644 --- a/functions-and-operators/tidb-functions.md +++ b/functions-and-operators/tidb-functions.md @@ -149,7 +149,7 @@ ORDER BY 4 rows in set (0.031 sec) ``` -`TIDB_DECODE_KEY` returns valid JSON on success and retuns the argument value if it fails to decode. +`TIDB_DECODE_KEY` returns valid JSON on success and returns the argument value if it fails to decode. ### TIDB_DECODE_PLAN From 2c13d8b3c08290b66215f323be32160e37764e6a Mon Sep 17 00:00:00 2001 From: Monday <73807364+mondayinsighter@users.noreply.github.com> Date: Mon, 29 Apr 2024 13:55:27 +0800 Subject: [PATCH 3/8] Fix typo: Schmea to Schema at encryption-at-rest.md (#17374) --- encryption-at-rest.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/encryption-at-rest.md b/encryption-at-rest.md index 8671963d3118f..b36c4ced30b99 100644 --- a/encryption-at-rest.md +++ b/encryption-at-rest.md @@ -287,7 +287,7 @@ The encryption algorithm currently supported by TiFlash is consistent with that The same master key can be shared by multiple instances of TiFlash, and can also be shared among TiFlash and TiKV. The recommended way to provide a master key in production is via AWS KMS. Alternatively, if using custom key is desired, supplying the master key via file is also supported. The specific method to generate master key and the format of the master key are the same as TiKV. -TiFlash uses the current data key to encrypt all data placed on the disk, including data files, Schmea files, and temporary data files generated during calculations. Data keys are automatically rotated by TiFlash every week by default, and the period is configurable. 
On key rotation, TiFlash does not rewrite all existing files to replace the key, but background compaction tasks are expected to rewrite old data into new data files, with the most recent data key, if the cluster gets constant write workload. TiFlash keeps track of the key and encryption method used to encrypt each of the files and use the information to decrypt the content on reads. +TiFlash uses the current data key to encrypt all data placed on the disk, including data files, Schema files, and temporary data files generated during calculations. Data keys are automatically rotated by TiFlash every week by default, and the period is configurable. On key rotation, TiFlash does not rewrite all existing files to replace the key, but background compaction tasks are expected to rewrite old data into new data files, with the most recent data key, if the cluster gets constant write workload. TiFlash keeps track of the key and encryption method used to encrypt each of the files and use the information to decrypt the content on reads. ### Key creation From 6936c1e67ecc67c32653e6caf8c8baa2e7d58530 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Mon, 29 Apr 2024 14:22:28 +0800 Subject: [PATCH 4/8] fix typos in docs (#17381) --- CONTRIBUTING.md | 2 +- benchmark/benchmark-tidb-using-sysbench.md | 2 +- best-practices-on-public-cloud.md | 2 +- check-before-deployment.md | 2 +- configure-memory-usage.md | 2 +- dashboard/dashboard-session-sso.md | 2 +- ddl-introduction.md | 2 +- dm/dm-enable-tls.md | 2 +- dm/dm-faq.md | 2 +- dm/dm-open-api.md | 2 +- dm/dm-table-routing.md | 2 +- dm/monitor-a-dm-cluster.md | 2 +- dm/quick-start-create-source.md | 2 +- explain-index-merge.md | 2 +- faq/manage-cluster-faq.md | 2 +- faq/migration-tidb-faq.md | 2 +- faq/sql-faq.md | 2 +- functions-and-operators/precision-math.md | 2 +- functions-and-operators/string-functions.md | 4 ++-- grafana-pd-dashboard.md | 2 +- information-schema/information-schema-deadlocks.md | 2 +- migrate-small-mysql-to-tidb.md | 4 ++-- migrate-with-pt-ghost.md | 2 +- online-unsafe-recovery.md | 4 ++-- oracle-functions-to-tidb.md | 4 ++-- partitioned-table.md | 2 +- performance-tuning-methods.md | 4 ++-- performance-tuning-overview.md | 2 +- releases/release-2.0-ga.md | 2 +- releases/release-2.1-ga.md | 2 +- releases/release-2.1.17.md | 2 +- releases/release-3.0.10.md | 2 +- releases/release-3.0.4.md | 2 +- releases/release-4.0.0-beta.1.md | 2 +- releases/release-4.0.0-rc.2.md | 4 ++-- releases/release-4.0.0-rc.md | 6 +++--- releases/release-4.0.16.md | 2 +- releases/release-4.0.6.md | 2 +- releases/release-5.0.0.md | 2 +- releases/release-5.0.6.md | 4 ++-- releases/release-5.1.4.md | 2 +- releases/release-5.2.2.md | 2 +- releases/release-5.3.0.md | 4 ++-- releases/release-6.1.0.md | 2 +- releases/release-6.1.1.md | 2 +- releases/release-6.3.0.md | 4 ++-- releases/release-6.4.0.md | 4 ++-- releases/release-6.5.1.md | 2 +- releases/release-6.5.4.md | 4 ++-- releases/release-6.6.0.md | 2 +- releases/release-7.1.3.md | 2 +- replicate-data-to-kafka.md | 2 +- runtime-filter.md | 2 +- security-compatibility-with-mysql.md | 2 +- sql-plan-replayer.md | 2 +- sql-statements/sql-statement-alter-index.md | 2 +- sql-statements/sql-statement-explain-analyze.md | 2 +- sql-statements/sql-statement-explain.md | 2 +- system-variables.md | 4 ++-- ticdc/monitor-ticdc.md | 4 ++-- ticdc/ticdc-faq.md | 2 +- ticdc/ticdc-manage-changefeed.md | 2 +- ticdc/ticdc-open-api.md | 2 +- tidb-cloud/data-service-oas-with-nextjs.md | 2 +- tidb-cloud/import-snapshot-files.md | 2 +- 
tidb-cloud/migrate-from-op-tidb.md | 2 +- tidb-cloud/notification-2023-09-26-console-maintenance.md | 2 +- tidb-cloud/serverless-driver-node-example.md | 2 +- tidb-cloud/terraform-use-cluster-resource.md | 2 +- tidb-cloud/tidb-cloud-import-local-files.md | 2 +- tidb-configuration-file.md | 2 +- tiflash/tiflash-spill-disk.md | 4 ++-- tiflash/tune-tiflash-performance.md | 4 ++-- tikv-configuration-file.md | 2 +- time-to-live.md | 4 ++-- tiproxy/tiproxy-command-line-flags.md | 2 +- tispark-overview.md | 4 ++-- tiup/tiup-mirror-reference.md | 2 +- tune-tikv-memory-performance.md | 2 +- 79 files changed, 98 insertions(+), 98 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 3d9ccc98b1ad2..a95ce9e6ca663 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -44,7 +44,7 @@ Please check out these templates before you submit a pull request: We use separate branches to maintain different versions of TiDB documentation. - The [documentation under development](https://docs.pingcap.com/tidb/dev) is maintained in the `master` branch. -- The [published documentation](https://docs.pingcap.com/tidb/stable/) is maintained in the corresponding `release-` branch. For example, TiDB v7.5 documentation is maintained in the `release-7.5` branch. +- The [published documentation](https://docs.pingcap.com/tidb/stable/) is maintained in the corresponding `release-` branch. For example, TiDB v7.5 documentation is maintained in the `release-7.5` branch. - The [archived documentation](https://docs-archive.pingcap.com/) is no longer maintained and does not receive any further updates. ### Use cherry-pick labels diff --git a/benchmark/benchmark-tidb-using-sysbench.md b/benchmark/benchmark-tidb-using-sysbench.md index 5977f46af152e..f821bdb1ff03f 100644 --- a/benchmark/benchmark-tidb-using-sysbench.md +++ b/benchmark/benchmark-tidb-using-sysbench.md @@ -20,7 +20,7 @@ server_configs: log.level: "error" ``` -It is also recommended to make sure [`tidb_enable_prepared_plan_cache`](/system-variables.md#tidb_enable_prepared_plan_cache-new-in-v610) is enabled and that you allow sysbench to use prepared statements by using `--db-ps-mode=auto`. See the [SQL Prepared Execution Plan Cache](/sql-prepared-plan-cache.md) for documetnation about what the SQL plan cache does and how to monitor it. +It is also recommended to make sure [`tidb_enable_prepared_plan_cache`](/system-variables.md#tidb_enable_prepared_plan_cache-new-in-v610) is enabled and that you allow sysbench to use prepared statements by using `--db-ps-mode=auto`. See the [SQL Prepared Execution Plan Cache](/sql-prepared-plan-cache.md) for documentation about what the SQL plan cache does and how to monitor it. > **Note:** > diff --git a/best-practices-on-public-cloud.md b/best-practices-on-public-cloud.md index d4677de975273..06497bfaf2f3b 100644 --- a/best-practices-on-public-cloud.md +++ b/best-practices-on-public-cloud.md @@ -180,7 +180,7 @@ To reduce the number of Regions and alleviate the heartbeat overhead on the syst ## After tuning -After the tunning, the following effects can be observed: +After the tuning, the following effects can be observed: - The TSO requests per second are decreased to 64,800. - The CPU utilization is significantly reduced from approximately 4,600% to 1,400%. 
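The benchmark-tidb-using-sysbench.md hunk above recommends enabling `tidb_enable_prepared_plan_cache` and running sysbench with `--db-ps-mode=auto` so that prepared statements can hit the plan cache. A minimal sketch of such a run, assuming a local TiDB on the default port 4000; all sizing flags are illustrative placeholders, not recommendations:

```shell
# The sbtest database must already exist; run the same command with
# `prepare` (and later `cleanup`) in place of `run` to create and drop
# the test tables.
sysbench oltp_point_select \
  --db-driver=mysql \
  --mysql-host=127.0.0.1 \
  --mysql-port=4000 \
  --mysql-user=root \
  --mysql-db=sbtest \
  --db-ps-mode=auto \
  --tables=16 \
  --table-size=1000000 \
  --threads=64 \
  --time=300 \
  run
```

With `--db-ps-mode=auto`, cache hits should show up in the Queries Using Plan Cache OPS panel referenced by the performance-tuning-methods.md hunk later in this patch.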
diff --git a/check-before-deployment.md b/check-before-deployment.md index 38a5e3b52b935..743d615d533e9 100644 --- a/check-before-deployment.md +++ b/check-before-deployment.md @@ -269,7 +269,7 @@ To check whether the NTP service is installed and whether it synchronizes with t Unable to talk to NTP daemon. Is it running? ``` -3. Run the `chronyc tracking` command to check wheter the Chrony service synchronizes with the NTP server. +3. Run the `chronyc tracking` command to check whether the Chrony service synchronizes with the NTP server. > **Note:** > diff --git a/configure-memory-usage.md b/configure-memory-usage.md index de90b3dbde9d9..71e03b7661184 100644 --- a/configure-memory-usage.md +++ b/configure-memory-usage.md @@ -57,7 +57,7 @@ Currently, the memory limit set by `tidb_server_memory_limit` **DOES NOT** termi > > + During the startup process, TiDB does not guarantee that the [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) limit is enforced. If the free memory of the operating system is insufficient, TiDB might still encounter OOM. You need to ensure that the TiDB instance has enough available memory. > + In the process of memory control, the total memory usage of TiDB might slightly exceed the limit set by `tidb_server_memory_limit`. -> + Since v6.5.0, the configruation item `server-memory-quota` is deprecated. To ensure compatibility, after you upgrade your cluster to v6.5.0 or a later version, `tidb_server_memory_limit` will inherit the value of `server-memory-quota`. If you have not configured `server-memory-quota` before the upgrade, the default value of `tidb_server_memory_limit` is used, which is `80%`. +> + Since v6.5.0, the configuration item `server-memory-quota` is deprecated. To ensure compatibility, after you upgrade your cluster to v6.5.0 or a later version, `tidb_server_memory_limit` will inherit the value of `server-memory-quota`. If you have not configured `server-memory-quota` before the upgrade, the default value of `tidb_server_memory_limit` is used, which is `80%`. When the memory usage of a tidb-server instance reaches a certain proportion of the total memory (the proportion is controlled by the system variable [`tidb_server_memory_limit_gc_trigger`](/system-variables.md#tidb_server_memory_limit_gc_trigger-new-in-v640)), tidb-server will try to trigger a Golang GC to relieve memory stress. To avoid frequent GCs that cause performance issues due to the instance memory fluctuating around the threshold, this GC method will trigger GC at most once every minute. diff --git a/dashboard/dashboard-session-sso.md b/dashboard/dashboard-session-sso.md index 27ccf6bce77e8..900c1944824d7 100644 --- a/dashboard/dashboard-session-sso.md +++ b/dashboard/dashboard-session-sso.md @@ -104,7 +104,7 @@ First, create an Okta Application Integration to integrate SSO. ![Sample Step](/media/dashboard/dashboard-session-sso-okta-1.png) -4. In the poped up dialog, choose **OIDC - OpenID Connect** in **Sign-in method**. +4. In the popped up dialog, choose **OIDC - OpenID Connect** in **Sign-in method**. 5. Choose **Single-Page Application** in **Application Type**. diff --git a/ddl-introduction.md b/ddl-introduction.md index 31ec8f0234410..7658ccfae57e7 100644 --- a/ddl-introduction.md +++ b/ddl-introduction.md @@ -77,7 +77,7 @@ absent -> delete only -> write only -> write reorg -> public For users, the newly created index is unavailable before the `public` state. -
+
Before v6.2.0, the process of handling asynchronous schema changes in the TiDB SQL layer is as follows: diff --git a/dm/dm-enable-tls.md b/dm/dm-enable-tls.md index 37dc390701cc9..292be260a1580 100644 --- a/dm/dm-enable-tls.md +++ b/dm/dm-enable-tls.md @@ -109,7 +109,7 @@ This section introduces how to enable encrypted data transmission between DM com ### Enable encrypted data transmission for downstream TiDB -1. Configure the downstream TiDB to use encrypted connections. For detailed operatons, refer to [Configure TiDB server to use secure connections](/enable-tls-between-clients-and-servers.md#configure-tidb-server-to-use-secure-connections). +1. Configure the downstream TiDB to use encrypted connections. For detailed operations, refer to [Configure TiDB server to use secure connections](/enable-tls-between-clients-and-servers.md#configure-tidb-server-to-use-secure-connections). 2. Set the TiDB client certificate in the task configuration file: diff --git a/dm/dm-faq.md b/dm/dm-faq.md index c6075787a505d..5ee968c2986e6 100644 --- a/dm/dm-faq.md +++ b/dm/dm-faq.md @@ -365,7 +365,7 @@ To solve this issue, you are recommended to maintain DM clusters using TiUP. In ## Why DM-master cannot be connected when I use dmctl to execute commands? -When using dmctl execute commands, you might find the connection to DM master fails (even if you have specified the parameter value of `--master-addr` in the command), and the error message is like `RawCause: context deadline exceeded, Workaround: please check your network connection.`. But afer checking the network connection using commands like `telnet `, no exception is found. +When using dmctl execute commands, you might find the connection to DM master fails (even if you have specified the parameter value of `--master-addr` in the command), and the error message is like `RawCause: context deadline exceeded, Workaround: please check your network connection.`. But after checking the network connection using commands like `telnet `, no exception is found. In this case, you can check the environment variable `https_proxy` (note that it is **https**). If this variable is configured, dmctl automatically connects the host and port specified by `https_proxy`. If the host does not have a corresponding `proxy` forwarding service, the connection fails. diff --git a/dm/dm-open-api.md b/dm/dm-open-api.md index c0fefce993896..ada86e1cbf718 100644 --- a/dm/dm-open-api.md +++ b/dm/dm-open-api.md @@ -1346,7 +1346,7 @@ curl -X 'GET' \ "name": "string", "source_name": "string", "worker_name": "string", - "stage": "runing", + "stage": "running", "unit": "sync", "unresolved_ddl_lock_id": "string", "load_status": { diff --git a/dm/dm-table-routing.md b/dm/dm-table-routing.md index 63f1089e56ef6..2625c6aaf1fb1 100644 --- a/dm/dm-table-routing.md +++ b/dm/dm-table-routing.md @@ -86,7 +86,7 @@ To migrate the upstream instances to the downstream `test`.`t`, you must create Assuming in the scenario of sharded schemas and tables, you want to migrate the `test_{1,2,3...}`.`t_{1,2,3...}` tables in two upstream MySQL instances to the `test`.`t` table in the downstream TiDB instance. At the same time, you want to extract the source information of the sharded tables and write it to the downstream merged table. -To migrate the upstream instances to the downstream `test`.`t`, you must create routing rules similar to the previous section [Merge sharded schemas and tables](#merge-sharded-schemas-and-tables). 
In addtion, you need to add the `extract-table`, `extract-schema`, and `extract-source` configurations: +To migrate the upstream instances to the downstream `test`.`t`, you must create routing rules similar to the previous section [Merge sharded schemas and tables](#merge-sharded-schemas-and-tables). In addition, you need to add the `extract-table`, `extract-schema`, and `extract-source` configurations: - `extract-table`: For a sharded table matching `schema-pattern` and `table-pattern`, DM extracts the sharded table name by using `table-regexp` and writes the name suffix without the `t_` part to `target-column` of the merged table, that is, the `c_table` column. - `extract-schema`: For a sharded schema matching `schema-pattern` and `table-pattern`, DM extracts the sharded schema name by using `schema-regexp` and writes the name suffix without the `test_` part to `target-column` of the merged table, that is, the `c_schema` column. diff --git a/dm/monitor-a-dm-cluster.md b/dm/monitor-a-dm-cluster.md index f40a66dc3d709..2af6bb1ac852e 100644 --- a/dm/monitor-a-dm-cluster.md +++ b/dm/monitor-a-dm-cluster.md @@ -94,7 +94,7 @@ The following metrics show only when `task-mode` is in the `incremental` or `all | total sqls jobs | The number of newly added jobs per unit of time | N/A | N/A | | finished sqls jobs | The number of finished jobs per unit of time | N/A | N/A | | statement execution latency | The duration that the binlog replication unit executes the statement to the downstream (in seconds) | N/A | N/A | -| add job duration | The duration tht the binlog replication unit adds a job to the queue (in seconds) | N/A | N/A | +| add job duration | The duration that the binlog replication unit adds a job to the queue (in seconds) | N/A | N/A | | DML conflict detect duration | The duration that the binlog replication unit detects the conflict in DML (in seconds) | N/A | N/A | | skipped event duration | The duration that the binlog replication unit skips a binlog event (in seconds) | N/A | N/A | | unsynced tables | The number of tables that have not received the shard DDL statement in the current subtask | N/A | N/A | diff --git a/dm/quick-start-create-source.md b/dm/quick-start-create-source.md index a3af12981f8a6..d2c18e1675e2e 100644 --- a/dm/quick-start-create-source.md +++ b/dm/quick-start-create-source.md @@ -84,7 +84,7 @@ The returned results are as follows: After creating a data source, you can use the following command to query the data source: -- If you konw the `source-id` of the data source, you can use the `dmctl config source ` command to directly check the configuration of the data source: +- If you know the `source-id` of the data source, you can use the `dmctl config source ` command to directly check the configuration of the data source: {{< copyable "shell-regular" >}} diff --git a/explain-index-merge.md b/explain-index-merge.md index 0e68f80ffff58..df3a4224eab47 100644 --- a/explain-index-merge.md +++ b/explain-index-merge.md @@ -94,6 +94,6 @@ When using the intersection-type index merge to access tables, the optimizer can > > - If the optimizer can choose the single index scan method (other than full table scan) for a query plan, the optimizer will not automatically use index merge. For the optimizer to use index merge, you need to use the optimizer hint. > -> - Index Merge is not supported in [tempoaray tables](/temporary-tables.md) for now. +> - Index Merge is not supported in [temporary tables](/temporary-tables.md) for now. 
> > - The intersection-type index merge will not automatically be selected by the optimizer. You must specify the **table name and index name** using the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint for it to be selected. diff --git a/faq/manage-cluster-faq.md b/faq/manage-cluster-faq.md index e56982aa51da7..1a6955b1e1cf6 100644 --- a/faq/manage-cluster-faq.md +++ b/faq/manage-cluster-faq.md @@ -73,7 +73,7 @@ TiDB provides a few features and [tools](/ecosystem-tool-user-guide.md), with wh The TiDB community is highly active. The engineers have been keeping optimizing features and fixing bugs. Therefore, the TiDB version is updated quite fast. If you want to keep informed of the latest version, see [TiDB Release Timeline](/releases/release-timeline.md). -It is recommeneded to deploy TiDB [using TiUP](/production-deployment-using-tiup.md) or [using TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/stable). TiDB has a unified management of the version number. You can view the version number using one of the following methods: +It is recommended to deploy TiDB [using TiUP](/production-deployment-using-tiup.md) or [using TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/stable). TiDB has a unified management of the version number. You can view the version number using one of the following methods: - `select tidb_version()` - `tidb-server -V` diff --git a/faq/migration-tidb-faq.md b/faq/migration-tidb-faq.md index 190deb6e80886..477c9559bc390 100644 --- a/faq/migration-tidb-faq.md +++ b/faq/migration-tidb-faq.md @@ -93,7 +93,7 @@ To migrate all the data or migrate incrementally from DB2 or Oracle to TiDB, see Currently, it is recommended to use OGG. -### Error: `java.sql.BatchUpdateExecption:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in `batches` +### Error: `java.sql.BatchUpdateException:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in `batches` In Sqoop, `--batch` means committing 100 `statement`s in each batch, but by default each `statement` contains 100 SQL statements. So, 100 * 100 = 10000 SQL statements, which exceeds 5000, the maximum number of statements allowed in a single TiDB transaction. diff --git a/faq/sql-faq.md b/faq/sql-faq.md index 967cea5adaea1..f28100b19b2a8 100644 --- a/faq/sql-faq.md +++ b/faq/sql-faq.md @@ -151,7 +151,7 @@ TiDB supports modifying the [`sql_mode`](/system-variables.md#sql_mode) system v - Changes to [`GLOBAL`](/sql-statements/sql-statement-set-variable.md) scoped variables propagate to the rest servers of the cluster and persist across restarts. This means that you do not need to change the `sql_mode` value on each TiDB server. - Changes to `SESSION` scoped variables only affect the current client session. After restarting a server, the changes are lost. -## Error: `java.sql.BatchUpdateExecption:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in batches +## Error: `java.sql.BatchUpdateException:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in batches In Sqoop, `--batch` means committing 100 statements in each batch, but by default each statement contains 100 SQL statements. So, 100 * 100 = 10000 SQL statements, which exceeds 5000, the maximum number of statements allowed in a single TiDB transaction. 
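The arithmetic is the same in both FAQ hunks: 100 statements per `--batch` commit, each containing 100 SQL statements, gives 10,000, which overflows the 5,000-statement transaction cap. The workaround these FAQ pages document is to lower `sqoop.export.records.per.statement`; a sketch, with the connection string, credentials, table, and export directory as placeholder assumptions:

```shell
# Lowering records.per.statement keeps one --batch commit within
# TiDB's single-transaction statement limit; all connection details
# below are placeholders.
sqoop export \
    -Dsqoop.export.records.per.statement=10 \
    --connect jdbc:mysql://tidb-host:4000/example_db \
    --username ${user} \
    --password ${passwd} \
    --table target_table \
    --export-dir /user/hive/warehouse/target_table \
    --batch
```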
diff --git a/functions-and-operators/precision-math.md b/functions-and-operators/precision-math.md index fca0983e8ab41..e3916a1b75b92 100644 --- a/functions-and-operators/precision-math.md +++ b/functions-and-operators/precision-math.md @@ -51,7 +51,7 @@ DECIMAL columns do not store a leading `+` character or `-` character or leading DECIMAL columns do not permit values larger than the range implied by the column definition. For example, a `DECIMAL(3,0)` column supports a range of `-999` to `999`. A `DECIMAL(M,D)` column permits at most `M - D` digits to the left of the decimal point. -For more information about the internal format of the DECIMAL values, see [`mydecimal.go`](https://github.com/pingcap/tidb/blob/master/pkg/types/mydecimal.go) in TiDB souce code. +For more information about the internal format of the DECIMAL values, see [`mydecimal.go`](https://github.com/pingcap/tidb/blob/master/pkg/types/mydecimal.go) in TiDB source code. ## Expression handling diff --git a/functions-and-operators/string-functions.md b/functions-and-operators/string-functions.md index a57c0bb6f5249..32c8d83c73a8e 100644 --- a/functions-and-operators/string-functions.md +++ b/functions-and-operators/string-functions.md @@ -218,10 +218,10 @@ SELECT CHAR_LENGTH("TiDB") AS LengthOfString; ``` ```sql -SELECT CustomerName, CHAR_LENGTH(CustomerName) AS LenghtOfName FROM Customers; +SELECT CustomerName, CHAR_LENGTH(CustomerName) AS LengthOfName FROM Customers; +--------------------+--------------+ -| CustomerName | LenghtOfName | +| CustomerName | LengthOfName | +--------------------+--------------+ | Albert Einstein | 15 | | Robert Oppenheimer | 18 | diff --git a/grafana-pd-dashboard.md b/grafana-pd-dashboard.md index 13790d5aa6f41..048dc0c7b3d69 100644 --- a/grafana-pd-dashboard.md +++ b/grafana-pd-dashboard.md @@ -78,7 +78,7 @@ The following is the description of PD Dashboard metrics items: - Store Write rate keys: The total written keys on each TiKV instance - Hot cache write entry number: The number of peers on each TiKV instance that are in the write hotspot statistics module - Selector events: The event count of Selector in the hotspot scheduling module -- Direction of hotspot move leader: The direction of leader movement in the hotspot scheduling. The positive number means scheduling into the instance. The negtive number means scheduling out of the instance +- Direction of hotspot move leader: The direction of leader movement in the hotspot scheduling. The positive number means scheduling into the instance. The negative number means scheduling out of the instance - Direction of hotspot move peer: The direction of peer movement in the hotspot scheduling. The positive number means scheduling into the instance. 
The negative number means scheduling out of the instance ![PD Dashboard - Hot write metrics](/media/pd-dashboard-hotwrite-v4.png) diff --git a/information-schema/information-schema-deadlocks.md b/information-schema/information-schema-deadlocks.md index e755d59cda0f9..debbeb65834c2 100644 --- a/information-schema/information-schema-deadlocks.md +++ b/information-schema/information-schema-deadlocks.md @@ -12,7 +12,7 @@ USE INFORMATION_SCHEMA; DESC deadlocks; ``` -Thhe output is as follows: +The output is as follows: ```sql +-------------------------+---------------------+------+------+---------+-------+ diff --git a/migrate-small-mysql-to-tidb.md b/migrate-small-mysql-to-tidb.md index 6846b8a792ab2..01c8731560ee2 100644 --- a/migrate-small-mysql-to-tidb.md +++ b/migrate-small-mysql-to-tidb.md @@ -137,8 +137,8 @@ To view the historical status of the migration task and other internal metrics, If you have deployed Prometheus, Alertmanager, and Grafana when deploying DM using TiUP, you can access Grafana using the IP address and port specified during the deployment. You can then select the DM dashboard to view DM-related monitoring metrics. -- The log directory of DM-master: specified by the DM-master process parameter `--log-file`. If you have deployd DM using TiUP, the log directory is `/dm-deploy/dm-master-8261/log/` by default. -- The log directory of DM-worker: specified by the DM-worker process parameter `--log-file`. If you have deployd DM using TiUP, the log directory is `/dm-deploy/dm-worker-8262/log/` by default. +- The log directory of DM-master: specified by the DM-master process parameter `--log-file`. If you have deployed DM using TiUP, the log directory is `/dm-deploy/dm-master-8261/log/` by default. +- The log directory of DM-worker: specified by the DM-worker process parameter `--log-file`. If you have deployed DM using TiUP, the log directory is `/dm-deploy/dm-worker-8262/log/` by default. ## What's next diff --git a/migrate-with-pt-ghost.md b/migrate-with-pt-ghost.md index 6b49e1c677523..4505e7a38c3f6 100644 --- a/migrate-with-pt-ghost.md +++ b/migrate-with-pt-ghost.md @@ -7,7 +7,7 @@ summary: Learn how to use DM to replicate incremental data from databases that u In production scenarios, table locking during DDL execution can block the reads from or writes to the database to a certain extent. Therefore, online DDL tools are often used to execute DDLs to minimize the impact on reads and writes. Common DDL tools are [gh-ost](https://github.com/github/gh-ost) and [pt-osc](https://www.percona.com/doc/percona-toolkit/3.0/pt-online-schema-change.html). -When using DM to migrate data from MySQL to TiDB, you can enbale `online-ddl` to allow collaboration of DM and gh-ost or pt-osc. +When using DM to migrate data from MySQL to TiDB, you can enable `online-ddl` to allow collaboration of DM and gh-ost or pt-osc. For the detailed replication instructions, refer to the following documents by scenarios: diff --git a/online-unsafe-recovery.md b/online-unsafe-recovery.md index 2310c2b546671..591dc9b36c019 100644 --- a/online-unsafe-recovery.md +++ b/online-unsafe-recovery.md @@ -38,7 +38,7 @@ Before using Online Unsafe Recovery, make sure that the following requirements a ### Step 1. Specify the stores that cannot be recovered -To trigger automatic recovery, use PD Control to execute [`unsafe remove-failed-stores [,,...]`](/pd-control.md#unsafe-remove-failed-stores-store-ids--show) and specify **all** the TiKV nodes that cannot be recovered, seperated by commas. 
+To trigger automatic recovery, use PD Control to execute [`unsafe remove-failed-stores [,,...]`](/pd-control.md#unsafe-remove-failed-stores-store-ids--show) and specify **all** the TiKV nodes that cannot be recovered, separated by commas. {{< copyable "shell-regular" >}} @@ -174,7 +174,7 @@ After the recovery is completed, the data and index might be inconsistent. Use t ADMIN CHECK TABLE table_name; ``` -If there are inconsistent indexes, you can fix the index inconsistency by renaming the old index, creating a new index, and then droping the old index. +If there are inconsistent indexes, you can fix the index inconsistency by renaming the old index, creating a new index, and then dropping the old index. 1. Rename the old index: diff --git a/oracle-functions-to-tidb.md b/oracle-functions-to-tidb.md index 49d1f7c4b3380..44472b49c9640 100644 --- a/oracle-functions-to-tidb.md +++ b/oracle-functions-to-tidb.md @@ -65,13 +65,13 @@ TiDB distinguishes between `NULL` and an empty string `''`. Oracle supports reading and writing to the same table in an `INSERT` statement. For example: ```sql -INSERT INTO table1 VALUES (feild1,(SELECT feild2 FROM table1 WHERE...)) +INSERT INTO table1 VALUES (field1,(SELECT field2 FROM table1 WHERE...)) ``` TiDB does not support reading and writing to the same table in a `INSERT` statement. For example: ```sql -INSERT INTO table1 VALUES (feild1,(SELECT T.fields2 FROM table1 T WHERE...)) +INSERT INTO table1 VALUES (field1,(SELECT T.fields2 FROM table1 T WHERE...)) ``` ### Get the first n rows from a query diff --git a/partitioned-table.md b/partitioned-table.md index 1f614f01babaa..1a70299593a64 100644 --- a/partitioned-table.md +++ b/partitioned-table.md @@ -610,7 +610,7 @@ Starting from v7.0.0, TiDB supports Key partitioning. For TiDB versions earlier Both Key partitioning and Hash partitioning can evenly distribute data into a certain number of partitions. The difference is that Hash partitioning only supports distributing data based on a specified integer expression or an integer column, while Key partitioning supports distributing data based on a column list, and partitioning columns of Key partitioning are not limited to the integer type. The Hash algorithm of TiDB for Key partitioning is different from that of MySQL, so the table data distribution is also different. -To create a Key partitioned table, you need to append a `PARTITION BY KEY (columList)` clause to the `CREATE TABLE` statement. `columList` is a column list with one or more column names. The data type of each column in the list can be any type except `BLOB`, `JSON`, and `GEOMETRY` (Note that TiDB does not support `GEOMETRY`). In addition, you might also need to append `PARTITIONS num` (where `num` is a positive integer indicating how many partitions a table is divided into), or append the definition of the partition names. For example, adding `(PARTITION p0, PARTITION p1)` means dividing the table into two partitions named `p0` and `p1`. +To create a Key partitioned table, you need to append a `PARTITION BY KEY (columnList)` clause to the `CREATE TABLE` statement. `columnList` is a column list with one or more column names. The data type of each column in the list can be any type except `BLOB`, `JSON`, and `GEOMETRY` (Note that TiDB does not support `GEOMETRY`). In addition, you might also need to append `PARTITIONS num` (where `num` is a positive integer indicating how many partitions a table is divided into), or append the definition of the partition names. 
For example, adding `(PARTITION p0, PARTITION p1)` means dividing the table into two partitions named `p0` and `p1`.

The following operation creates a Key partitioned table, which is divided into 4 partitions by `store_id`:

diff --git a/performance-tuning-methods.md b/performance-tuning-methods.md
index 9d64deb6165c1..af4573457d918 100644
--- a/performance-tuning-methods.md
+++ b/performance-tuning-methods.md
@@ -187,7 +187,7 @@ The number of `StmtPrepare` commands per second is much greater than that of `St

![OLTP-Query](/media/performance/prepared_statement_leaking.png)

-- In the QPS panel, the red bold line indicates the number of failed queries, and the Y axis on the right indicates the coordinate value of the number. In this example, the number of failed quries per second is 74.6.
+- In the QPS panel, the red bold line indicates the number of failed queries, and the Y axis on the right indicates the coordinate value of the number. In this example, the number of failed queries per second is 74.6.
- In the CPS By Type panel, the number of `StmtPrepare` commands per second is much greater than that of `StmtClose` per second, which indicates that an object leak occurs in the application for prepared statements.
- In the Queries Using Plan Cache OPS panel, `avg-miss` is almost equal to `StmtExecute` in the CPS By Type panel, which indicates that almost all SQL executions miss the execution plan cache.
@@ -256,7 +256,7 @@ The Duration panel contains the average and P99 latency of all statements, and t

- in-txn: The interval between processing the previous SQL and receiving the next SQL statement when the connection is within a transaction.
- not-in-txn: The interval between processing the previous SQL and receiving the next SQL statement when the connection is not within a transaction.

-An applications perform transactions with the same database connction. By comparing the average query latency with the connection idle duration, you can determine if TiDB is the bottleneck for overall system, or if user response time jitter is caused by TiDB.
+An application performs transactions with the same database connection. By comparing the average query latency with the connection idle duration, you can determine if TiDB is the bottleneck for the overall system, or if user response time jitter is caused by TiDB.

- If the application workload is not read-only and contains transactions, by comparing the average query latency with `avg-in-txn`, you can determine the proportion in processing transactions inside and outside the database, and identify the bottleneck in user response time.
- If the application workload is read-only or autocommit mode is on, you can compare the average query latency with `avg-not-in-txn`.
diff --git a/performance-tuning-overview.md b/performance-tuning-overview.md
index c4ba353e763f3..4332f9cc9b590 100644
--- a/performance-tuning-overview.md
+++ b/performance-tuning-overview.md
@@ -107,7 +107,7 @@ After identifying the bottleneck of a system through performance analysis, you c

According to [Amdahl's Law](https://en.wikipedia.org/wiki/Amdahl%27s_law), the maximum gain from performance tuning depends on the percentage of the optimized part in the overall system. Therefore, you need to identify the system bottlenecks and the corresponding percentage based on the performance data, and then predict the gains after the bottleneck is resolved or optimized. 
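To make the Amdahl's Law estimate concrete: if the optimized part accounts for a fraction $p$ of total response time and is sped up by a factor $s$, the overall speedup is

$$
S_{\text{overall}} = \frac{1}{(1 - p) + \frac{p}{s}}
$$

For example, tripling the speed of a bottleneck that takes 60% of response time ($p = 0.6$, $s = 3$) yields $1 / (0.4 + 0.2) \approx 1.67$, so the whole system gets at most about 1.67 times faster, which is why estimating the bottleneck's share comes before choosing a fix.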
-Note that even if a solution can bring the greatest potential benefits by tunning the largest bottleneck, you still need to evaluate the risks and costs of this solution. For example: +Note that even if a solution can bring the greatest potential benefits by tuning the largest bottleneck, you still need to evaluate the risks and costs of this solution. For example: - The most straightforward tuning objective solution for a resource-overloaded system is to expand its capacity, but in practice, the expansion solution might be too costly to be adopted. - When a slow query in a business module causes a slow response of the entire module, upgrading to a new version of the database can solve the slow query issue, but it might also affect modules that did not have this issue. Therefore, this solution might have a potentially high risk. A low-risk solution is to skip the database version upgrade and rewrite the existing slow queries for the current database version. diff --git a/releases/release-2.0-ga.md b/releases/release-2.0-ga.md index a48622c9b17f9..f56693e4185a1 100644 --- a/releases/release-2.0-ga.md +++ b/releases/release-2.0-ga.md @@ -68,7 +68,7 @@ On April 27, 2018, TiDB 2.0 GA is released! Compared with TiDB 1.0, this release - Optimize the scheduling policies to prevent the disks from becoming full when the space of TiKV nodes is insufficient - Improve the scheduling efficiency of the balance-leader scheduler - Reduce the scheduling overhead of the balance-region scheduler - - Optimize the execution efficiency of the the hot-region scheduler + - Optimize the execution efficiency of the hot-region scheduler - Operations interface and configuration - Support TLS - Support prioritizing the PD leaders diff --git a/releases/release-2.1-ga.md b/releases/release-2.1-ga.md index 6b52b6ffd5718..c070c48b6b6f5 100644 --- a/releases/release-2.1-ga.md +++ b/releases/release-2.1-ga.md @@ -89,7 +89,7 @@ On November 30, 2018, TiDB 2.1 GA is released. 
See the following updates in this - Check the TiDB cluster information - - [Add the `auto_analyze_ratio` system variables to contorl the ratio of Analyze](/faq/sql-faq.md#whats-the-trigger-strategy-for-auto-analyze-in-tidb) + - [Add the `auto_analyze_ratio` system variables to control the ratio of Analyze](/faq/sql-faq.md#whats-the-trigger-strategy-for-auto-analyze-in-tidb) - [Add the `tidb_retry_limit` system variable to control the automatic retry times of transactions](/system-variables.md#tidb_retry_limit) diff --git a/releases/release-2.1.17.md b/releases/release-2.1.17.md index ba9e51a26aa0f..c2ce2a99b1a6f 100644 --- a/releases/release-2.1.17.md +++ b/releases/release-2.1.17.md @@ -46,7 +46,7 @@ TiDB Ansible version: 2.1.17 - Change `start ts` recorded in slow query logs from the last retry time to the first execution time when retrying TiDB transactions [#11878](https://github.com/pingcap/tidb/pull/11878) - Add the number of keys of a transaction in `LockResolver` to avoid the scan operation on the whole Region and reduce costs of resolving locking when the number of keys is reduced [#11889](https://github.com/pingcap/tidb/pull/11889) - Fix the issue that the `succ` field value might be incorrect in slow query logs [#11886](https://github.com/pingcap/tidb/pull/11886) - - Replace the `Index_ids` filed in slow query logs with the `Index_names` field to improve the usability of slow query logs [#12063](https://github.com/pingcap/tidb/pull/12063) + - Replace the `Index_ids` field in slow query logs with the `Index_names` field to improve the usability of slow query logs [#12063](https://github.com/pingcap/tidb/pull/12063) - Fix the connection break issue caused by TiDB parsing `-` into EOF Error when `Duration` contains `-` (like `select time(‘--’)`) [#11910](https://github.com/pingcap/tidb/pull/11910) - Remove an invalid Region from `RegionCache` more quickly to reduce the number of requests sent to this Region [#11931](https://github.com/pingcap/tidb/pull/11931) - Fix the connection break issue caused by incorrectly handling the OOM panic issue when `oom-action = "cancel"` and OOM occurs in the `Insert Into … Select` syntax [#12126](https://github.com/pingcap/tidb/pull/12126) diff --git a/releases/release-3.0.10.md b/releases/release-3.0.10.md index 662986493769a..bfe5818569f0d 100644 --- a/releases/release-3.0.10.md +++ b/releases/release-3.0.10.md @@ -52,7 +52,7 @@ TiDB Ansible version: 3.0.10 + Raftstore - Fix the system panic issue #6460 or data loss issue #598 caused by Region merge failure [#6481](https://github.com/tikv/tikv/pull/6481) - - Support `yield` to optimize scheduling fairness, and support pre-transfering the leader to improve leader scheduling stability [#6563](https://github.com/tikv/tikv/pull/6563) + - Support `yield` to optimize scheduling fairness, and support pre-transferring the leader to improve leader scheduling stability [#6563](https://github.com/tikv/tikv/pull/6563) ## PD diff --git a/releases/release-3.0.4.md b/releases/release-3.0.4.md index 6b98ff46ac5ec..93df3984ed0b0 100644 --- a/releases/release-3.0.4.md +++ b/releases/release-3.0.4.md @@ -43,7 +43,7 @@ TiDB Ansible version: 3.0.4 ## TiDB - SQL Optimizer - - Fix the issue that invalid query ranges might be resulted when splitted by feedback [#12170](https://github.com/pingcap/tidb/pull/12170) + - Fix the issue that invalid query ranges might be resulted when split by feedback [#12170](https://github.com/pingcap/tidb/pull/12170) - Display the returned error of the `SHOW STATS_BUCKETS` statement in 
hexadecimal rather than return errors when the result contains invalid Keys [#12094](https://github.com/pingcap/tidb/pull/12094) - Fix the issue that when a query contains the `SLEEP` function (for example, `select 1 from (select sleep(1)) t;)`), column pruning causes invalid `sleep(1)` during query [#11953](https://github.com/pingcap/tidb/pull/11953) - Use index scan to lower IO when a query only concerns the number of columns rather than the table data [#12112](https://github.com/pingcap/tidb/pull/12112) diff --git a/releases/release-4.0.0-beta.1.md b/releases/release-4.0.0-beta.1.md index 03ed69dc18fb2..18fbea5598b25 100644 --- a/releases/release-4.0.0-beta.1.md +++ b/releases/release-4.0.0-beta.1.md @@ -84,7 +84,7 @@ TiDB Ansible version: 4.0.0-beta.1 + Fix the incorrect results of `BatchPointGet` when `plan cache` is enabled [#14855](https://github.com/pingcap/tidb/pull/14855) + Fix the issue that data is inserted into the wrong partitioned table after the timezone is modified [#14370](https://github.com/pingcap/tidb/pull/14370) + Fix the panic occurred when rebuilding expression using the invalid name of the `IsTrue` function during the outer join simplification [#14515](https://github.com/pingcap/tidb/pull/14515) - + Fix the the incorrect privilege check for the`show binding` statement [#14443](https://github.com/pingcap/tidb/pull/14443) + + Fix the incorrect privilege check for the`show binding` statement [#14443](https://github.com/pingcap/tidb/pull/14443) * TiKV + Fix the inconsistent behaviors of the `CAST` function in TiDB and TiKV [#6463](https://github.com/tikv/tikv/pull/6463) [#6461](https://github.com/tikv/tikv/pull/6461) [#6459](https://github.com/tikv/tikv/pull/6459) [#6474](https://github.com/tikv/tikv/pull/6474) [#6492](https://github.com/tikv/tikv/pull/6492) [#6569](https://github.com/tikv/tikv/pull/6569) diff --git a/releases/release-4.0.0-rc.2.md b/releases/release-4.0.0-rc.2.md index 7daa263e87642..9cadff71ef25c 100644 --- a/releases/release-4.0.0-rc.2.md +++ b/releases/release-4.0.0-rc.2.md @@ -89,7 +89,7 @@ TiDB version: 4.0.0-rc.2 - Change the name of the Count graph of **Read Index** in Grafana to **Ops** - Optimize the data for opening file descriptors when the system load is low to reduce system resource consumption - - Add the capacity-related configuration parameter to limit the the data storage capacity + - Add the capacity-related configuration parameter to limit the data storage capacity + Tools @@ -160,7 +160,7 @@ TiDB version: 4.0.0-rc.2 - Fix the issue that the backup data cannot be restored from GCS [#7739](https://github.com/tikv/tikv/pull/7739) - Fix the issue that KMS key ID is not validated during encryption at rest [#7719](https://github.com/tikv/tikv/pull/7719) - Fix the underlying correctness issue of the Coprocessor in compilers of different architecture [#7714](https://github.com/tikv/tikv/pull/7714) [#7730](https://github.com/tikv/tikv/pull/7730) - - Fix the `snapshot ingestion` error when encrytion is enabled [#7815](https://github.com/tikv/tikv/pull/7815) + - Fix the `snapshot ingestion` error when encryption is enabled [#7815](https://github.com/tikv/tikv/pull/7815) - Fix the `Invalid cross-device link` error when rewriting the configuration file [#7817](https://github.com/tikv/tikv/pull/7817) - Fix the issue of wrong toml format when writing the configuration file to an empty file [#7817](https://github.com/tikv/tikv/pull/7817) - Fix the issue that a destroyed peer in Raftstore can still process requests 
[#7836](https://github.com/tikv/tikv/pull/7836) diff --git a/releases/release-4.0.0-rc.md b/releases/release-4.0.0-rc.md index 08f4c0fb10e30..8ed7c2b47e8f7 100644 --- a/releases/release-4.0.0-rc.md +++ b/releases/release-4.0.0-rc.md @@ -38,7 +38,7 @@ TiUP version: 0.0.3 + TiDB - Fix the issue that replication between the upstream and downstream might go wrong when the DDL job is executed using the `PREPARE` statement because of the incorrect job query in the internal records [#15435](https://github.com/pingcap/tidb/pull/15435) - - Fix the issue of incorrect subquery result in the `Read Commited` isolation level [#15471](https://github.com/pingcap/tidb/pull/15471) + - Fix the issue of incorrect subquery result in the `Read Committed` isolation level [#15471](https://github.com/pingcap/tidb/pull/15471) - Fix the issue of incorrect results caused by the Inline Projection optimization [#15411](https://github.com/pingcap/tidb/pull/15411) - Fix the issue that the SQL Hint `INL_MERGE_JOIN` is executed incorrectly in some cases [#15515](https://github.com/pingcap/tidb/pull/15515) - Fix the issue that columns with the `AutoRandom` attribute are rebased when the negative number is explicitly written to these columns [#15397](https://github.com/pingcap/tidb/pull/15397) @@ -49,7 +49,7 @@ TiUP version: 0.0.3 - Add the case-insensitive collation so that users can enable `utf8mb4_general_ci` and `utf8_general_ci` in a new cluster [#33](https://github.com/pingcap/tidb/projects/33) - Enhance the `RECOVER TABLE` syntax to support recovering truncated tables [#15398](https://github.com/pingcap/tidb/pull/15398) - - Refuse to get started instead of returning an alert log when the the tidb-server status port is occupied [#15177](https://github.com/pingcap/tidb/pull/15177) + - Refuse to get started instead of returning an alert log when the tidb-server status port is occupied [#15177](https://github.com/pingcap/tidb/pull/15177) - Optimize the write performance of using a sequence as the default column values [#15216](https://github.com/pingcap/tidb/pull/15216) - Add the `DDLJobs` system table to query the details of DDL jobs [#14837](https://github.com/pingcap/tidb/pull/14837) - Optimize the `aggFuncSum` performance [#14887](https://github.com/pingcap/tidb/pull/14887) @@ -80,7 +80,7 @@ TiUP version: 0.0.3 + TiDB - Fix the issue that replication between the upstream and downstream might go wrong when the DDL job is executed using the `PREPARE` statement because of the incorrect job query in the internal records [#15435](https://github.com/pingcap/tidb/pull/15435) - - Fix the issue of incorrect subquery result in the `Read Commited` isolation level [#15471](https://github.com/pingcap/tidb/pull/15471) + - Fix the issue of incorrect subquery result in the `Read Committed` isolation level [#15471](https://github.com/pingcap/tidb/pull/15471) - Fix the issue of possible wrong behavior when using `INSERT ... 
VALUES` to specify the `BIT(N)` data type [#15350](https://github.com/pingcap/tidb/pull/15350)
- Fix the issue that the DDL Job internal retry does not fully achieve the expected outcomes because the values of `ErrorCount` fail to be summed correctly [#15373](https://github.com/pingcap/tidb/pull/15373)
- Fix the issue that Garbage Collection might work abnormally when TiDB connects to TiFlash [#15505](https://github.com/pingcap/tidb/pull/15505)
diff --git a/releases/release-4.0.16.md b/releases/release-4.0.16.md
index a1751d6d91a60..c552efed45683 100644
--- a/releases/release-4.0.16.md
+++ b/releases/release-4.0.16.md
@@ -81,7 +81,7 @@ TiDB version: 4.0.16

+ PD

- Fix a panic issue that occurs after the TiKV node is removed [#4344](https://github.com/tikv/pd/issues/4344)
- - Fix slow leader election caused by stucked region syncer [#3936](https://github.com/tikv/pd/issues/3936)
+ - Fix slow leader election caused by stuck region syncer [#3936](https://github.com/tikv/pd/issues/3936)
- Support that the evict leader scheduler can schedule regions with unhealthy peers [#4093](https://github.com/tikv/pd/issues/4093)

+ TiFlash
diff --git a/releases/release-4.0.6.md b/releases/release-4.0.6.md
index 27fbc9c3bd858..fd46dac44a0d3 100644
--- a/releases/release-4.0.6.md
+++ b/releases/release-4.0.6.md
@@ -106,7 +106,7 @@ TiDB version: 4.0.6

- Fix a bug of converting the `enum` and `set` types [#19778](https://github.com/pingcap/tidb/pull/19778)
- Add a privilege check for `SHOW STATS_META` and `SHOW STATS_BUCKET` [#19760](https://github.com/pingcap/tidb/pull/19760)
- Fix the error of unmatched column lengths caused by `builtinGreatestStringSig` and `builtinLeastStringSig` [#19758](https://github.com/pingcap/tidb/pull/19758)
- - If unnecessary errors or warnings occur, the vectorized control expresions fall back to their scalar execution [#19749](https://github.com/pingcap/tidb/pull/19749)
+ - If unnecessary errors or warnings occur, the vectorized control expressions fall back to their scalar execution [#19749](https://github.com/pingcap/tidb/pull/19749)
- Fix the error of the `Apply` operator when the type of the correlation column is `Bit` [#19692](https://github.com/pingcap/tidb/pull/19692)
- Fix the issue that occurs when the user queries `processlist` and `cluster_log` in MySQL 8.0 client [#19690](https://github.com/pingcap/tidb/pull/19690)
- Fix the issue that plans of the same type have different plan digests [#19684](https://github.com/pingcap/tidb/pull/19684)
diff --git a/releases/release-5.0.0.md b/releases/release-5.0.0.md
index b4e13c57cfd7a..fa4f630eaa1ff 100644
--- a/releases/release-5.0.0.md
+++ b/releases/release-5.0.0.md
@@ -334,7 +334,7 @@ You can view the manually or automatically bound execution plan information by r

When upgrading TiDB, to avoid performance jitter, you can enable the baseline capturing feature to allow the system to automatically capture and bind the latest execution plan and store it in the system table. After TiDB is upgraded, you can export the bound execution plan by running the `SHOW GLOBAL BINDING` command and decide whether to delete these plans.

-This feature is disbled by default. You can enable it by modifying the server or setting the `tidb_capture_plan_baselines` global system variable to `ON`. When this feature is enabled, the system fetches the SQL statements that appear at least twice from the Statement Summary every `bind-info-lease` (the default value is `3s`), and automatically captures and binds these SQL statements. 
+This feature is disabled by default. You can enable it by modifying the server or setting the `tidb_capture_plan_baselines` global system variable to `ON`. When this feature is enabled, the system fetches the SQL statements that appear at least twice from the Statement Summary every `bind-info-lease` (the default value is `3s`), and automatically captures and binds these SQL statements. ### Improve stability of TiFlash queries diff --git a/releases/release-5.0.6.md b/releases/release-5.0.6.md index 03c185b560291..584e8fee26cd8 100644 --- a/releases/release-5.0.6.md +++ b/releases/release-5.0.6.md @@ -76,7 +76,7 @@ TiDB version: 5.0.6 - Fix the `INDEX OUT OF RANGE` error for a MPP query after deleting an empty `dual table` [#28250](https://github.com/pingcap/tidb/issues/28250) - Fix the TiDB panic when inserting invalid date values concurrently [#25393](https://github.com/pingcap/tidb/issues/25393) - Fix the unexpected `can not found column in Schema column` error for queries in the MPP mode [#30980](https://github.com/pingcap/tidb/issues/30980) - - Fix the issue that TiDB might panic when TiFlash is shuting down [#28096](https://github.com/pingcap/tidb/issues/28096) + - Fix the issue that TiDB might panic when TiFlash is shutting down [#28096](https://github.com/pingcap/tidb/issues/28096) - Fix the unexpected `index out of range` error when the planner is doing join reorder [#24095](https://github.com/pingcap/tidb/issues/24095) - Fix wrong results of the control functions (such as `IF` and `CASE WHEN`) when using the `ENUM` type data as parameters of such functions [#23114](https://github.com/pingcap/tidb/issues/23114) - Fix the wrong result of `CONCAT(IFNULL(TIME(3))` [#29498](https://github.com/pingcap/tidb/issues/29498) @@ -114,7 +114,7 @@ TiDB version: 5.0.6 - Fix a panic issue that occurs after the TiKV node is removed [#4344](https://github.com/tikv/pd/issues/4344) - Fix the issue that operator can get blocked due to down store [#3353](https://github.com/tikv/pd/issues/3353) - - Fix slow leader election caused by stucked Region syncer [#3936](https://github.com/tikv/pd/issues/3936) + - Fix slow leader election caused by stuck Region syncer [#3936](https://github.com/tikv/pd/issues/3936) - Fix the issue that the speed of removing peers is limited when repairing the down nodes [#4090](https://github.com/tikv/pd/issues/4090) - Fix the issue that the hotspot cache cannot be cleared when the Region heartbeat is less than 60 seconds [#4390](https://github.com/tikv/pd/issues/4390) diff --git a/releases/release-5.1.4.md b/releases/release-5.1.4.md index a969762ad9554..1fc9c2ee415d5 100644 --- a/releases/release-5.1.4.md +++ b/releases/release-5.1.4.md @@ -110,7 +110,7 @@ TiDB version: 5.1.4 - Fix a bug that the schedule generated by the region scatterer might decrease the number of peers [#4565](https://github.com/tikv/pd/issues/4565) - Fix the issue that Region statistics are not affected by `flow-round-by-digit` [#4295](https://github.com/tikv/pd/issues/4295) - - Fix slow leader election caused by stucked region syncer [#3936](https://github.com/tikv/pd/issues/3936) + - Fix slow leader election caused by stuck region syncer [#3936](https://github.com/tikv/pd/issues/3936) - Support that the evict leader scheduler can schedule regions with unhealthy peers [#4093](https://github.com/tikv/pd/issues/4093) - Fix the issue that the cold hotspot data cannot be deleted from the hotspot statistics [#4390](https://github.com/tikv/pd/issues/4390) - Fix a panic issue that occurs after the TiKV node is 
removed [#4344](https://github.com/tikv/pd/issues/4344) diff --git a/releases/release-5.2.2.md b/releases/release-5.2.2.md index 4595e7bb8e039..7ba8c354199b9 100644 --- a/releases/release-5.2.2.md +++ b/releases/release-5.2.2.md @@ -63,7 +63,7 @@ TiDB version: 5.2.2 - Fix the `INDEX OUT OF RANGE` error for a MPP query after deleting an empty `dual table`. [#28250](https://github.com/pingcap/tidb/issues/28250) - Fix the issue of false positive error log `invalid cop task execution summaries length` for MPP queries [#1791](https://github.com/pingcap/tics/issues/1791) - Fix the issue of error log `cannot found column in Schema column` for MPP queries [#28149](https://github.com/pingcap/tidb/pull/28149) - - Fix the issue that TiDB might panic when TiFlash is shuting down [#28096](https://github.com/pingcap/tidb/issues/28096) + - Fix the issue that TiDB might panic when TiFlash is shutting down [#28096](https://github.com/pingcap/tidb/issues/28096) - Remove the support for insecure 3DES (Triple Data Encryption Algorithm) based TLS cipher suites [#27859](https://github.com/pingcap/tidb/pull/27859) - Fix the issue that Lightning connects to offline TiKV nodes during pre-check and causes import failures [#27826](https://github.com/pingcap/tidb/pull/27826) - Fix the issue that pre-check cost too much time when importing many files to tables [#27605](https://github.com/pingcap/tidb/issues/27605) diff --git a/releases/release-5.3.0.md b/releases/release-5.3.0.md index a7c026b88a0b1..9014dd337163e 100644 --- a/releases/release-5.3.0.md +++ b/releases/release-5.3.0.md @@ -337,7 +337,7 @@ Starting from TiCDC v5.3.0, the cyclic replication feature between TiDB clusters - Fix the `INDEX OUT OF RANGE` error for a MPP query after deleting an empty `dual table` [#28250](https://github.com/pingcap/tidb/issues/28250) - Fix the issue of false positive error log `invalid cop task execution summaries length` for MPP queries [#1791](https://github.com/pingcap/tics/issues/1791) - Fix the issue of error log `cannot found column in Schema column` for MPP queries [#28149](https://github.com/pingcap/tidb/pull/28149) - - Fix the issue that TiDB might panic when TiFlash is shuting down [#28096](https://github.com/pingcap/tidb/issues/28096) + - Fix the issue that TiDB might panic when TiFlash is shutting down [#28096](https://github.com/pingcap/tidb/issues/28096) - Remove the support for insecure 3DES (Triple Data Encryption Algorithm) based TLS cipher suites [#27859](https://github.com/pingcap/tidb/pull/27859) - Fix the issue that Lightning connects to offline TiKV nodes during pre-check and causes import failures [#27826](https://github.com/pingcap/tidb/pull/27826) - Fix the issue that pre-check cost too much time when importing many files to tables [#27605](https://github.com/pingcap/tidb/issues/27605) @@ -376,7 +376,7 @@ Starting from TiCDC v5.3.0, the cyclic replication feature between TiDB clusters - Fix the issue that the scatter range scheduler cannot schedule empty Regions [#4118](https://github.com/tikv/pd/pull/4118) - Fix the issue that the key manager cost too much CPU [#4071](https://github.com/tikv/pd/issues/4071) - Fix the data race issue that might occur when setting configurations of hot Region scheduler [#4159](https://github.com/tikv/pd/issues/4159) - - Fix slow leader election caused by stucked Region syncer [#3936](https://github.com/tikv/pd/issues/3936) + - Fix slow leader election caused by stuck Region syncer [#3936](https://github.com/tikv/pd/issues/3936) + TiFlash diff --git 
a/releases/release-6.1.0.md b/releases/release-6.1.0.md index bbed3e39297c0..c612f4d0f0260 100644 --- a/releases/release-6.1.0.md +++ b/releases/release-6.1.0.md @@ -342,7 +342,7 @@ In 6.1.0, the key new features or improvements are as follows: - CDC supports RawKV [#11965](https://github.com/tikv/tikv/issues/11965) - Support splitting a large snapshot file into multiple files [#11595](https://github.com/tikv/tikv/issues/11595) - Move the snapshot garbage collection from Raftstore to background thread to prevent snapshot GC from blocking Raftstore message loops [#11966](https://github.com/tikv/tikv/issues/11966) - - Support dynamic setting of the the maximum message length (`max-grpc-send-msg-len`) and the maximum batch size of gPRC messages (`raft-msg-max-batch-size`) [#12334](https://github.com/tikv/tikv/issues/12334) + - Support dynamic setting of the maximum message length (`max-grpc-send-msg-len`) and the maximum batch size of gRPC messages (`raft-msg-max-batch-size`) [#12334](https://github.com/tikv/tikv/issues/12334) - Support executing online unsafe recovery plan through Raft [#10483](https://github.com/tikv/tikv/issues/10483) + PD diff --git a/releases/release-6.1.1.md b/releases/release-6.1.1.md index 22efd1666a0f8..47aadba78f361 100644 --- a/releases/release-6.1.1.md +++ b/releases/release-6.1.1.md @@ -81,7 +81,7 @@ Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.1/quick-start-with- - Fix the issue that the wrong join reorder in some right outer join scenarios causes wrong query result [#36912](https://github.com/pingcap/tidb/issues/36912) @[winoros](https://github.com/winoros) - Fix the issue of incorrectly inferred null flag of the TiFlash `firstrow` aggregate function in the EqualAll case [#34584](https://github.com/pingcap/tidb/issues/34584) @[fixdb](https://github.com/fixdb) - Fix the issue that Plan Cache does not work when a binding is created with the `IGNORE_PLAN_CACHE` hint [#34596](https://github.com/pingcap/tidb/issues/34596) @[fzzf678](https://github.com/fzzf678) - - Fix the issu that an `EXCHANGE` operator is missing between the hash-partition window and the single-partition window [#35990](https://github.com/pingcap/tidb/issues/35990) @[LittleFall](https://github.com/LittleFall) + - Fix the issue that an `EXCHANGE` operator is missing between the hash-partition window and the single-partition window [#35990](https://github.com/pingcap/tidb/issues/35990) @[LittleFall](https://github.com/LittleFall) - Fix the issue that partitioned tables cannot fully use indexes to scan data in some cases [#33966](https://github.com/pingcap/tidb/issues/33966) @[mjonss](https://github.com/mjonss) - Fix the issue of wrong query result when a wrong default value is set for partial aggregation after the aggregation is pushed down [#35295](https://github.com/pingcap/tidb/issues/35295) @[tiancaiamao](https://github.com/tiancaiamao) - Fix the issue that querying partitioned tables might get the `index-out-of-range` error in some cases [#35181](https://github.com/pingcap/tidb/issues/35181) @[mjonss](https://github.com/mjonss) diff --git a/releases/release-6.3.0.md b/releases/release-6.3.0.md index cea5f127a171d..98ff3bcb881ad 100644 --- a/releases/release-6.3.0.md +++ b/releases/release-6.3.0.md @@ -335,7 +335,7 @@ Since v6.3.0, TiCDC no longer supports configuring Pulsar sink.
[kop](https://gi - Fix the issue that the privilege check is skipped for `PREPARE` statements [#35784](https://github.com/pingcap/tidb/issues/35784) @[lcwangchao](https://github.com/lcwangchao) - Fix the issue that the system variable `tidb_enable_noop_variable` can be set to `WARN` [#36647](https://github.com/pingcap/tidb/issues/36647) @[lcwangchao](https://github.com/lcwangchao) - - Fix the issue that when an expression index is defined, the `ORDINAL_POSITION` column of the `INFORMAITON_SCHEMA.COLUMNS` table might be incorrect [#31200](https://github.com/pingcap/tidb/issues/31200) @[bb7133](https://github.com/bb7133) + - Fix the issue that when an expression index is defined, the `ORDINAL_POSITION` column of the `INFORMATION_SCHEMA.COLUMNS` table might be incorrect [#31200](https://github.com/pingcap/tidb/issues/31200) @[bb7133](https://github.com/bb7133) - Fix the issue that TiDB does not report an error when the timestamp is larger than `MAXINT32` [#31585](https://github.com/pingcap/tidb/issues/31585) @[bb7133](https://github.com/bb7133) - Fix the issue that TiDB server cannot be started when the enterprise plugin is used [#37319](https://github.com/pingcap/tidb/issues/37319) @[xhebox](https://github.com/xhebox) - Fix the incorrect output of `SHOW CREATE PLACEMENT POLICY` [#37526](https://github.com/pingcap/tidb/issues/37526) @[xhebox](https://github.com/xhebox) @@ -353,7 +353,7 @@ Since v6.3.0, TiCDC no longer supports configuring Pulsar sink. [kop](https://gi - Fix the issue that the cast and comparison between binary strings and JSON in TiDB are incompatible with MySQL [#31918](https://github.com/pingcap/tidb/issues/31918) [#25053](https://github.com/pingcap/tidb/issues/25053) @[YangKeao](https://github.com/YangKeao) - Fix the issue that `JSON_OBJECTAGG` and `JSON_ARRAYAGG` in TiDB are not compatible with MySQL on binary values [#25053](https://github.com/pingcap/tidb/issues/25053) @[YangKeao](https://github.com/YangKeao) - Fix the issue that the comparison between JSON opaque values causes panic [#37315](https://github.com/pingcap/tidb/issues/37315) @[YangKeao](https://github.com/YangKeao) - - Fix the issue that the single precision float cannot be used in JSON aggregation funtions [#37287](https://github.com/pingcap/tidb/issues/37287) @[YangKeao](https://github.com/YangKeao) + - Fix the issue that the single precision float cannot be used in JSON aggregation functions [#37287](https://github.com/pingcap/tidb/issues/37287) @[YangKeao](https://github.com/YangKeao) - Fix the issue that the `UNION` operator might return unexpected empty result [#36903](https://github.com/pingcap/tidb/issues/36903) @[tiancaiamao](https://github.com/tiancaiamao) - Fix the issue that the result of the `castRealAsTime` expression is inconsistent with MySQL [#37462](https://github.com/pingcap/tidb/issues/37462) @[mengxin9014](https://github.com/mengxin9014) - Fix the issue that pessimistic DML operations lock non-unique index keys [#36235](https://github.com/pingcap/tidb/issues/36235) @[ekexium](https://github.com/ekexium) diff --git a/releases/release-6.4.0.md b/releases/release-6.4.0.md index 3c21330e33ae2..f10166faea90c 100644 --- a/releases/release-6.4.0.md +++ b/releases/release-6.4.0.md @@ -168,7 +168,7 @@ In v6.4.0-DMR, the key new features and improvements are as follows: * Be compatible with the Linear Hash partitioning syntax [#38450](https://github.com/pingcap/tidb/issues/38450) @[mjonss](https://github.com/mjonss) - In the earlier version, TiDB has supported the Hash, Range, and List 
partitioning. Starting from v6.4.0, TiDB can also be compatible with the syntaxt of [MySQL Linear Hash partitioning](https://dev.mysql.com/doc/refman/5.7/en/partitioning-linear-hash.html). + In earlier versions, TiDB already supports Hash, Range, and List partitioning. Starting from v6.4.0, TiDB can also be compatible with the syntax of [MySQL Linear Hash partitioning](https://dev.mysql.com/doc/refman/5.7/en/partitioning-linear-hash.html). In TiDB, you can execute the existing DDL statements of your MySQL Linear Hash partitions directly, and TiDB will create the corresponding Hash partition tables (note that there is no Linear Hash partition inside TiDB). You can also execute the existing DML statements of your MySQL Linear Hash partitions directly, and TiDB will return the query result of the corresponding TiDB Hash partitions normally. This feature ensures the TiDB syntax compatibility with MySQL Linear Hash partitions and facilitates seamless migration from MySQL-based applications to TiDB. @@ -340,7 +340,7 @@ In v6.4.0-DMR, the key new features and improvements are as follows: - Add a new configuration item `apply-yield-write-size` to control the maximum number of bytes that the Apply thread can write for one Finite-state Machine in one round of poll, and relieve Raftstore congestion when the Apply thread writes a large volume of data [#13313](https://github.com/tikv/tikv/issues/13313) @[glorv](https://github.com/glorv) - Warm up the entry cache before migrating the leader of Region to avoid QPS jitter during the leader transfer process [#13060](https://github.com/tikv/tikv/issues/13060) @[cosven](https://github.com/cosven) - - Support pushing down the `json_constains` operator to Coprocessor [#13592](https://github.com/tikv/tikv/issues/13592) @[lizhenhuan](https://github.com/lizhenhuan) + - Support pushing down the `json_contains` operator to Coprocessor [#13592](https://github.com/tikv/tikv/issues/13592) @[lizhenhuan](https://github.com/lizhenhuan) - Add the asynchronous function for `CausalTsProvider` to improve the flush performance in some scenarios [#13428](https://github.com/tikv/tikv/issues/13428) @[zeminzhou](https://github.com/zeminzhou) + PD diff --git a/releases/release-6.5.1.md b/releases/release-6.5.1.md index 2708883471578..b0febb93ef432 100644 --- a/releases/release-6.5.1.md +++ b/releases/release-6.5.1.md @@ -50,7 +50,7 @@ Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.5/quick-start-with- - Support starting TiKV on a CPU with less than 1 core [#13586](https://github.com/tikv/tikv/issues/13586) [#13752](https://github.com/tikv/tikv/issues/13752) [#14017](https://github.com/tikv/tikv/issues/14017) @[andreid-db](https://github.com/andreid-db) - Increase the thread limit of the Unified Read Pool (`readpool.unified.max-thread-count`) to 10 times the CPU quota, to better handle high-concurrency queries [#13690](https://github.com/tikv/tikv/issues/13690) @[v01dstar](https://github.com/v01dstar) - - Change the the default value of `resolved-ts.advance-ts-interval` from `"1s"` to `"20s"`, to reduce cross-region traffic [#14100](https://github.com/tikv/tikv/issues/14100) @[overvenus](https://github.com/overvenus) + - Change the default value of `resolved-ts.advance-ts-interval` from `"1s"` to `"20s"`, to reduce cross-region traffic [#14100](https://github.com/tikv/tikv/issues/14100) @[overvenus](https://github.com/overvenus) + TiFlash diff --git a/releases/release-6.5.4.md b/releases/release-6.5.4.md index 9ab3f36f70769..0c2667d5eb67f 100644 ---
a/releases/release-6.5.4.md +++ b/releases/release-6.5.4.md @@ -112,7 +112,7 @@ Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.5/quick-start-with- - Fix the issue that killing a connection might cause go coroutine leaks [#46034](https://github.com/pingcap/tidb/issues/46034) @[pingyu](https://github.com/pingyu) - Fix the issue that the `tmp-storage-quota` configuration does not take effect [#45161](https://github.com/pingcap/tidb/issues/45161) [#26806](https://github.com/pingcap/tidb/issues/26806) @[wshwsh12](https://github.com/wshwsh12) - Fix the issue that TiFlash replicas might be unavailable when a TiFlash node is down in the cluster [#38484](https://github.com/pingcap/tidb/issues/38484) @[hehechen](https://github.com/hehechen) - - Fix the issue that TiDB crashes due to possible data race when reading and writing `Config.Lables` concurrently [#45561](https://github.com/pingcap/tidb/issues/45561) @[genliqi](https://github.com/gengliqi) + - Fix the issue that TiDB crashes due to possible data race when reading and writing `Config.Labels` concurrently [#45561](https://github.com/pingcap/tidb/issues/45561) @[gengliqi](https://github.com/gengliqi) - Fix the issue that the client-go regularly updating `min-resolved-ts` might cause PD OOM when the cluster is large [#46664](https://github.com/pingcap/tidb/issues/46664) @[HuSharp](https://github.com/HuSharp) + TiKV @@ -132,7 +132,7 @@ Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.5/quick-start-with- - Fix the issue that when etcd is already started but the client has not yet connected to it, calling the client might cause PD to panic [#6860](https://github.com/tikv/pd/issues/6860) @[HuSharp](https://github.com/HuSharp) - Fix the issue that a leader cannot exit for a long time [#6918](https://github.com/tikv/pd/issues/6918) @[bufferflies](https://github.com/bufferflies) - - Fix the issue that when the placement rule uses `LOCATION_LABLES`, SQL and the Rule Checker are not compatible [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) + - Fix the issue that when the placement rule uses `LOCATION_LABELS`, SQL and the Rule Checker are not compatible [#38605](https://github.com/pingcap/tidb/issues/38605) @[nolouch](https://github.com/nolouch) - Fix the issue that PD might unexpectedly add multiple Learners to a Region [#5786](https://github.com/tikv/pd/issues/5786) @[HunDunDM](https://github.com/HunDunDM) - Fix the issue that unhealthy peers cannot be removed when rule checker selects peers [#6559](https://github.com/tikv/pd/issues/6559) @[nolouch](https://github.com/nolouch) - Fix the issue that failed learner peers in `unsafe recovery` are ignored in `auto-detect` mode [#6690](https://github.com/tikv/pd/issues/6690) @[v01dstar](https://github.com/v01dstar) diff --git a/releases/release-6.6.0.md b/releases/release-6.6.0.md index a6bea929368a5..1a234a695f5dd 100644 --- a/releases/release-6.6.0.md +++ b/releases/release-6.6.0.md @@ -518,7 +518,7 @@ In v6.6.0-DMR, the key new features and improvements are as follows: - Fix the issue that resources are not released when disabling the resource management module [#40546](https://github.com/pingcap/tidb/issues/40546) @[zimulala](https://github.com/zimulala) - Fix the issue that TTL tasks cannot trigger statistics updates in time [#40109](https://github.com/pingcap/tidb/issues/40109) @[YangKeao](https://github.com/YangKeao) - Fix the issue that unexpected data is read because TiDB improperly handles `NULL` values when constructing key
ranges [#40158](https://github.com/pingcap/tidb/issues/40158) @[tiancaiamao](https://github.com/tiancaiamao) - - Fix the issue that illegal values are written to a table when the `MODIFT COLUMN` statement also changes the default value of a column [#40164](https://github.com/pingcap/tidb/issues/40164) @[wjhuang2016](https://github.com/wjhuang2016) + - Fix the issue that illegal values are written to a table when the `MODIFY COLUMN` statement also changes the default value of a column [#40164](https://github.com/pingcap/tidb/issues/40164) @[wjhuang2016](https://github.com/wjhuang2016) - Fix the issue that the adding index operation is inefficient due to invalid Region cache when there are many Regions in a table [#38436](https://github.com/pingcap/tidb/issues/38436) @[tangenta](https://github.com/tangenta) - Fix data race occurred in allocating auto-increment IDs [#40584](https://github.com/pingcap/tidb/issues/40584) @[Dousir9](https://github.com/Dousir9) - Fix the issue that the implementation of the not operator in JSON is incompatible with the implementation in MySQL [#40683](https://github.com/pingcap/tidb/issues/40683) @[YangKeao](https://github.com/YangKeao) diff --git a/releases/release-7.1.3.md b/releases/release-7.1.3.md index 006742a2eb4e4..0f8e4ddcfc3b5 100644 --- a/releases/release-7.1.3.md +++ b/releases/release-7.1.3.md @@ -96,7 +96,7 @@ Quick access: [Quick start](https://docs.pingcap.com/tidb/v7.1/quick-start-with- - Fix the issue that TiKV reports the `ServerIsBusy` error because it can not append the raft log [#15800](https://github.com/tikv/tikv/issues/15800) @[tonyxuqqi](https://github.com/tonyxuqqi) - Fix the issue that snapshot restore might get stuck when BR crashes [#15684](https://github.com/tikv/tikv/issues/15684) @[YuJuncen](https://github.com/YuJuncen) - Fix the issue that Resolved TS in stale read might cause TiKV OOM issues when tracking large transactions [#14864](https://github.com/tikv/tikv/issues/14864) @[overvenus](https://github.com/overvenus) - - Fix the issue that damaged SST files might be spreaded to other TiKV nodes [#15986](https://github.com/tikv/tikv/issues/15986) @[Connor1996](https://github.com/Connor1996) + - Fix the issue that damaged SST files might be spread to other TiKV nodes [#15986](https://github.com/tikv/tikv/issues/15986) @[Connor1996](https://github.com/Connor1996) - Fix the issue that the joint state of DR Auto-Sync might time out when scaling out [#15817](https://github.com/tikv/tikv/issues/15817) @[Connor1996](https://github.com/Connor1996) - Fix the issue that the scheduler command variables are incorrect in Grafana on the cloud environment [#15832](https://github.com/tikv/tikv/issues/15832) @[Connor1996](https://github.com/Connor1996) - Fix the issue that stale peers are retained and block resolved-ts after Regions are merged [#15919](https://github.com/tikv/tikv/issues/15919) @[overvenus](https://github.com/overvenus) diff --git a/replicate-data-to-kafka.md b/replicate-data-to-kafka.md index 7dd7bf3d3a722..1cf94d1c41252 100644 --- a/replicate-data-to-kafka.md +++ b/replicate-data-to-kafka.md @@ -31,7 +31,7 @@ The preceding steps are performed in a lab environment. You can also deploy a cl 2. Create a Kafka cluster. - - Lab environment: refer to [Apache Kakfa Quickstart](https://kafka.apache.org/quickstart) to start a Kafka cluster. + - Lab environment: refer to [Apache Kafka Quickstart](https://kafka.apache.org/quickstart) to start a Kafka cluster. 
- Production environment: refer to [Running Kafka in Production](https://docs.confluent.io/platform/current/kafka/deployment.html) to deploy a Kafka production cluster. 3. (Optional) Create a Flink cluster. diff --git a/runtime-filter.md b/runtime-filter.md index c4d01012a35d2..1d2031322929f 100644 --- a/runtime-filter.md +++ b/runtime-filter.md @@ -67,7 +67,7 @@ The execution process of Runtime Filter is as follows: | | filter data | | | | +-----+----v------+ +-------+--------+ - | TableFullScan | | TabelFullScan | + | TableFullScan | | TableFullScan | | store_sales | | date_dim | +-----------------+ +----------------+ ``` diff --git a/security-compatibility-with-mysql.md b/security-compatibility-with-mysql.md index ba6a5b0972cc8..c120a20188f6d 100644 --- a/security-compatibility-with-mysql.md +++ b/security-compatibility-with-mysql.md @@ -143,7 +143,7 @@ The support for TLS authentication is configured differently. For detailed infor ### `tidb_auth_token` -`tidb_auth_token` is a passwordless authentication method based on [JSON Web Token (JWT)](https://datatracker.ietf.org/doc/html/rfc7519). In v6.4.0, `tidb_auth_token` is only used for user authentication in TiDB Cloud. Starting from v6.5.0, you can also configure `tidb_auth_token` as a user authentication method for TiDB Self-Hosted. Different from password-based authentication methods such as `mysql_native_passsword` and `caching_sha2_password`, when you create users using `tidb_auth_token`, there is no need to set or store custom passwords. To log into TiDB, users only need to use a signed token instead of a password, which simplifies the authentication process and improves security. +`tidb_auth_token` is a passwordless authentication method based on [JSON Web Token (JWT)](https://datatracker.ietf.org/doc/html/rfc7519). In v6.4.0, `tidb_auth_token` is only used for user authentication in TiDB Cloud. Starting from v6.5.0, you can also configure `tidb_auth_token` as a user authentication method for TiDB Self-Hosted. Different from password-based authentication methods such as `mysql_native_password` and `caching_sha2_password`, when you create users using `tidb_auth_token`, there is no need to set or store custom passwords. To log into TiDB, users only need to use a signed token instead of a password, which simplifies the authentication process and improves security. #### JWT diff --git a/sql-plan-replayer.md b/sql-plan-replayer.md index d88f39f120708..82d9bf703fde2 100644 --- a/sql-plan-replayer.md +++ b/sql-plan-replayer.md @@ -31,7 +31,7 @@ Based on `sql-statement`, TiDB sorts out and exports the following on-site infor - The table schema in `sql-statement` - The statistics of the table in `sql-statement` - The result of `EXPLAIN [ANALYZE] sql-statement` -- Some internal procudures of query optimization +- Some internal procedures of query optimization If historical statistics are [enabled](/system-variables.md#tidb_enable_historical_stats), you can specify a time in the `PLAN REPLAYER` statement to get the historical statistics for the corresponding time. You can directly specify a time and date or specify a timestamp. TiDB looks for the historical statistics before the specified time and exports the latest one among them. 
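The `sql-plan-replayer.md` hunk above refers to statements of the following shape; a minimal sketch for reference only, in which the table `t`, the filter `a < 100`, and the timestamp value are placeholders, and the second form assumes that historical statistics (`tidb_enable_historical_stats`) are enabled:

```sql
-- Export the on-site information (table schema, statistics, EXPLAIN result)
-- for one query into a replayer dump file.
PLAN REPLAYER DUMP EXPLAIN SELECT * FROM t WHERE a < 100;

-- Export using the latest historical statistics recorded before the specified time,
-- assuming tidb_enable_historical_stats is ON.
PLAN REPLAYER DUMP WITH STATS AS OF TIMESTAMP '2024-04-29 10:00:00' EXPLAIN SELECT * FROM t WHERE a < 100;
```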
diff --git a/sql-statements/sql-statement-alter-index.md b/sql-statements/sql-statement-alter-index.md index f729fb3ef24d7..d060bc1a31a1c 100644 --- a/sql-statements/sql-statement-alter-index.md +++ b/sql-statements/sql-statement-alter-index.md @@ -118,7 +118,7 @@ Query OK, 0 rows affected (0.02 sec) ## MySQL compatibility * Invisible indexes in TiDB are modeled on the equivalent feature from MySQL 8.0. -* Similiar to MySQL, TiDB does not permit `PRIMARY KEY` indexes to be made invisible. +* Similar to MySQL, TiDB does not permit `PRIMARY KEY` indexes to be made invisible. ## See also diff --git a/sql-statements/sql-statement-explain-analyze.md b/sql-statements/sql-statement-explain-analyze.md index 462b0c8d84ec9..fc455434d9c09 100644 --- a/sql-statements/sql-statement-explain-analyze.md +++ b/sql-statements/sql-statement-explain-analyze.md @@ -34,7 +34,7 @@ ExplainableStmt ::= ## EXPLAIN ANALYZE output format -Different from `EXPLAIN`, `EXPLAIN ANALYZE` executes the corresponding SQL statement, records its runtime information, and returns the information together with the execution plan. Therefore, you can regard `EXPLAIN ANALYZE` as an extension of the `EXPLAIN` statement. Compared to `EXPLAIN` (for debugging query exeuction), the return results of `EXPLAIN ANALYZE` also include columns of information such as `actRows`, `execution info`, `memory`, and `disk`. The details of these columns are shown as follows: +Different from `EXPLAIN`, `EXPLAIN ANALYZE` executes the corresponding SQL statement, records its runtime information, and returns the information together with the execution plan. Therefore, you can regard `EXPLAIN ANALYZE` as an extension of the `EXPLAIN` statement. Compared to `EXPLAIN` (for debugging query execution), the return results of `EXPLAIN ANALYZE` also include columns of information such as `actRows`, `execution info`, `memory`, and `disk`. The details of these columns are shown as follows: | attribute name | description | |:----------------|:---------------------------------| diff --git a/sql-statements/sql-statement-explain.md b/sql-statements/sql-statement-explain.md index e99e5b1d12598..5a872c6ffc28a 100644 --- a/sql-statements/sql-statement-explain.md +++ b/sql-statements/sql-statement-explain.md @@ -336,7 +336,7 @@ In the output, `id`, `estRows`, `taskType`, `accessObject`, and `operatorInfo` h ## MySQL compatibility -* Both the format of `EXPLAIN` and the potential execution plans in TiDB differ substaintially from MySQL. +* Both the format of `EXPLAIN` and the potential execution plans in TiDB differ substantially from MySQL. * TiDB does not support the `FORMAT=JSON` or `FORMAT=TREE` options. * `FORMAT=tidb_json` in TiDB is the JSON format output of the default `EXPLAIN` result. The format and fields are different from the `FORMAT=JSON` output in MySQL. diff --git a/system-variables.md b/system-variables.md index 7028a465eebd3..1f897ea2d412d 100644 --- a/system-variables.md +++ b/system-variables.md @@ -5232,10 +5232,10 @@ Query OK, 0 rows affected, 1 warning (0.00 sec) - Applies to hint [SET_VAR](/optimizer-hints.md#set_varvar_namevar_value): No - Type: Boolean - Default value: `ON` -- When accesing a partitioned table in [dynamic pruning mode](/partitioned-table.md#dynamic-pruning-mode), TiDB aggregates the statistics of each partition to generate GlobalStats. This variable controls the generation of GlobalStats when partition statistics are missing. 
+- When accessing a partitioned table in [dynamic pruning mode](/partitioned-table.md#dynamic-pruning-mode), TiDB aggregates the statistics of each partition to generate GlobalStats. This variable controls the generation of GlobalStats when partition statistics are missing. - If this variable is `ON`, TiDB skips missing partition statistics when generating GlobalStats so the generation of GlobalStats is not affected. - - If this variable is `OFF`, TiDB stops generating GloablStats when it detects any missing partition statistics. + - If this variable is `OFF`, TiDB stops generating GlobalStats when it detects any missing partition statistics. ### tidb_skip_utf8_check diff --git a/ticdc/monitor-ticdc.md b/ticdc/monitor-ticdc.md index 66feb10fcd071..08f355e63f8bf 100644 --- a/ticdc/monitor-ticdc.md +++ b/ticdc/monitor-ticdc.md @@ -97,8 +97,8 @@ The description of each metric in the **Events** panel is as follows: - Entry sorter sort duration percentile: The time (P95, P99, and P999) spent by TiCDC sorting events within one second - Entry sorter merge duration: The histogram of the time spent by TiCDC nodes merging sorted events - Entry sorter merge duration percentile: The time (P95, P99, and P999) spent by TiCDC merging sorted events within one second -- Mounter unmarshal duration: The histogram of the time spent by TiCDC nodes unmarshaling events -- Mounter unmarshal duration percentile: The time (P95, P99, and P999) spent by TiCDC unmarshaling events within one second +- Mounter unmarshal duration: The histogram of the time spent by TiCDC nodes unmarshalling events +- Mounter unmarshal duration percentile: The time (P95, P99, and P999) spent by TiCDC unmarshalling events within one second - KV client dispatch events/s: The number of events that the KV client module dispatches among the TiCDC nodes - KV client batch resolved size: The batch size of resolved timestamp messages that TiKV sends to TiCDC diff --git a/ticdc/ticdc-faq.md b/ticdc/ticdc-faq.md index 058dd245998ad..474ca3616cb7e 100644 --- a/ticdc/ticdc-faq.md +++ b/ticdc/ticdc-faq.md @@ -136,7 +136,7 @@ For more information, refer to [TiCDC changefeed configurations](/ticdc/ticdc-ch ## When TiCDC replicates data to Kafka, can I control the maximum size of a single message in TiDB? -When `protocol` is set to `avro` or `canal-json`, messages are sent per row change. A single Kafka message contains only one row change and is generally no larger than Kafka's limit. Therefore, there is no need to limit the size of a single message. If the size of a single Kafka message does exceed Kakfa's limit, refer to [Why does the latency from TiCDC to Kafka become higher and higher?](/ticdc/ticdc-faq.md#why-does-the-latency-from-ticdc-to-kafka-become-higher-and-higher). +When `protocol` is set to `avro` or `canal-json`, messages are sent per row change. A single Kafka message contains only one row change and is generally no larger than Kafka's limit. Therefore, there is no need to limit the size of a single message. If the size of a single Kafka message does exceed Kafka's limit, refer to [Why does the latency from TiCDC to Kafka become higher and higher?](/ticdc/ticdc-faq.md#why-does-the-latency-from-ticdc-to-kafka-become-higher-and-higher). When `protocol` is set to `open-protocol`, messages are sent in batches. Therefore, one Kafka message might be excessively large. To avoid this situation, you can configure the `max-message-bytes` parameter to control the maximum size of data sent to the Kafka broker each time (optional, `10MB` by default). 
You can also configure the `max-batch-size` parameter (optional, `16` by default) to specify the maximum number of change records in each Kafka message. diff --git a/ticdc/ticdc-manage-changefeed.md b/ticdc/ticdc-manage-changefeed.md index 7c4695d4ed9fc..8169d319b669d 100644 --- a/ticdc/ticdc-manage-changefeed.md +++ b/ticdc/ticdc-manage-changefeed.md @@ -144,7 +144,7 @@ In the preceding command and result: - `checkpoint-ts`: The largest transaction `TS` in the current `changefeed`. Note that this `TS` has been successfully written to the downstream. - `admin-job-type`: The status of a changefeed: - `0`: The state is normal. - - `1`: The task is paused. When the task is paused, all replicated `processor`s exit. The configuration and the replication status of the task are retained, so you can resume the task from `checkpiont-ts`. + - `1`: The task is paused. When the task is paused, all replicated `processor`s exit. The configuration and the replication status of the task are retained, so you can resume the task from `checkpoint-ts`. - `2`: The task is resumed. The replication task resumes from `checkpoint-ts`. - `3`: The task is removed. When the task is removed, all replicated `processor`s are ended, and the configuration information of the replication task is cleared up. Only the replication status is retained for later queries. - `task-status` indicates the state of each replication sub-task in the queried changefeed. diff --git a/ticdc/ticdc-open-api.md b/ticdc/ticdc-open-api.md index 331528f7d31e9..789baa924a791 100644 --- a/ticdc/ticdc-open-api.md +++ b/ticdc/ticdc-open-api.md @@ -150,7 +150,7 @@ The configuration parameters of sink are as follows: {"matcher":["test1.*", "test2.*"], "dispatcher":"ts"}, {"matcher":["test3.*", "test4.*"], "dispatcher":"index-value"} ], - "protocal":"canal-json" + "protocol":"canal-json" } ``` diff --git a/tidb-cloud/data-service-oas-with-nextjs.md b/tidb-cloud/data-service-oas-with-nextjs.md index d1d3b4b3e20b3..d2a9c5a741a3c 100644 --- a/tidb-cloud/data-service-oas-with-nextjs.md +++ b/tidb-cloud/data-service-oas-with-nextjs.md @@ -182,7 +182,7 @@ You can use the generated client code to develop your Next.js application. > **Note:** > - > If the linked clusters of your Data App are hosted in different regions, you wil see multiple items in the `servers` section of the downloaded OpenAPI Specification file. In this case, you also need to configure the endpoint path in the `config` object as follows: + > If the linked clusters of your Data App are hosted in different regions, you will see multiple items in the `servers` section of the downloaded OpenAPI Specification file. In this case, you also need to configure the endpoint path in the `config` object as follows: > > ```js > const config = new Configuration({ diff --git a/tidb-cloud/import-snapshot-files.md b/tidb-cloud/import-snapshot-files.md index 9d0e01d63d7d8..e31daac06fc85 100644 --- a/tidb-cloud/import-snapshot-files.md +++ b/tidb-cloud/import-snapshot-files.md @@ -7,4 +7,4 @@ summary: Learn how to import Amazon Aurora or RDS for MySQL snapshot files into You can import Amazon Aurora or RDS for MySQL snapshot files into TiDB Cloud. Note that all source data files with the `.parquet` suffix in the `{db_name}.{table_name}/` folder must conform to the [naming convention](/tidb-cloud/naming-conventions-for-data-import.md). -The process of importing snapshot files is similiar to that of importing Parquet files. 
For more information, see [Import Apache Parquet Files from Amazon S3 or GCS into TiDB Cloud](/tidb-cloud/import-parquet-files.md). +The process of importing snapshot files is similar to that of importing Parquet files. For more information, see [Import Apache Parquet Files from Amazon S3 or GCS into TiDB Cloud](/tidb-cloud/import-parquet-files.md). diff --git a/tidb-cloud/migrate-from-op-tidb.md b/tidb-cloud/migrate-from-op-tidb.md index ce2f4f685e13e..f96f59668f203 100644 --- a/tidb-cloud/migrate-from-op-tidb.md +++ b/tidb-cloud/migrate-from-op-tidb.md @@ -187,7 +187,7 @@ Do the following to export data from the upstream TiDB cluster to Amazon S3 usin The `-t` option specifies the number of threads for the export. Increasing the number of threads improves the concurrency of Dumpling and the export speed, and also increases the database's memory consumption. Therefore, do not set a too large number for this parameter. - For mor information, see [Dumpling](https://docs.pingcap.com/tidb/stable/dumpling-overview#export-to-sql-files). + For more information, see [Dumpling](https://docs.pingcap.com/tidb/stable/dumpling-overview#export-to-sql-files). 4. Check the export data. Usually the exported data includes the following: diff --git a/tidb-cloud/notification-2023-09-26-console-maintenance.md b/tidb-cloud/notification-2023-09-26-console-maintenance.md index e85b29135c439..5e00966b1da98 100644 --- a/tidb-cloud/notification-2023-09-26-console-maintenance.md +++ b/tidb-cloud/notification-2023-09-26-console-maintenance.md @@ -20,7 +20,7 @@ This notification describes the details that you need to know about the [TiDB Cl ## Reason for maintenance -We're upgrading the management infrastucture of the TiDB Cloud Serverless to enhance performance and efficiency, delivering a better experience for all users. This is part of our ongoing commitment to providing high-quality services. +We're upgrading the management infrastructure of the TiDB Cloud Serverless to enhance performance and efficiency, delivering a better experience for all users. This is part of our ongoing commitment to providing high-quality services. ## Impact diff --git a/tidb-cloud/serverless-driver-node-example.md b/tidb-cloud/serverless-driver-node-example.md index 926bfdd562564..b6a9fb7509012 100644 --- a/tidb-cloud/serverless-driver-node-example.md +++ b/tidb-cloud/serverless-driver-node-example.md @@ -75,7 +75,7 @@ The serverless driver supports both CommonJS and ES modules. The following steps node index.js ``` -## Compatability with earlier versions of Node.js +## Compatibility with earlier versions of Node.js If you are using Node.js earlier than 18.0.0, which does not have a global `fetch` function, you can take the following steps to get `fetch`: diff --git a/tidb-cloud/terraform-use-cluster-resource.md b/tidb-cloud/terraform-use-cluster-resource.md index 48104f6e4addc..1f223d17ce471 100644 --- a/tidb-cloud/terraform-use-cluster-resource.md +++ b/tidb-cloud/terraform-use-cluster-resource.md @@ -604,7 +604,7 @@ You can scale a TiDB cluster when its status is `AVAILABLE`. 1. In the `cluster.tf` file that is used when you [create the cluster](#create-a-cluster-using-the-cluster-resource), edit the `components` configurations. - For example, to add one more node for TiDB, 3 more nodes for TiKV (The number of TiKV nodes needs to be a multiple of 3 for its step is 3. 
You can [get this information from the cluster specifcation](#get-cluster-specification-information-using-the-tidbcloud_cluster_specs-data-source)), and one more node for TiFlash, you can edit the configurations as follows: + For example, to add one more node for TiDB, 3 more nodes for TiKV (The number of TiKV nodes needs to be a multiple of 3 for its step is 3. You can [get this information from the cluster specification](#get-cluster-specification-information-using-the-tidbcloud_cluster_specs-data-source)), and one more node for TiFlash, you can edit the configurations as follows: ``` components = { diff --git a/tidb-cloud/tidb-cloud-import-local-files.md b/tidb-cloud/tidb-cloud-import-local-files.md index ed20166acf53f..89c0e514a1a7c 100644 --- a/tidb-cloud/tidb-cloud-import-local-files.md +++ b/tidb-cloud/tidb-cloud-import-local-files.md @@ -18,7 +18,7 @@ Currently, this method supports importing one CSV file for one task into either - If the extra columns are not the primary keys or the unique keys, no error will be reported. Instead, these extra columns will be populated with their [default values](/data-type-default-values.md). - If the extra columns are the primary keys or the unique keys and do not have the `auto_increment` or `auto_random` attribute, an error will be reported. In that case, it is recommended that you choose one of the following strategies: - Provide a source file that includes these the primary keys or the unique keys columns. - - Set the attributes of the the primary key or the unique key columns to `auto_increment` or `auto_random`. + - Set the attributes of the primary key or the unique key columns to `auto_increment` or `auto_random`. - If a column name is a reserved [keyword](/keywords.md) in TiDB, TiDB Cloud automatically adds backticks `` ` `` to enclose the column name. For example, if the column name is `order`, TiDB Cloud automatically adds backticks `` ` `` to change it to `` `order` `` and imports the data into the target table. ## Import local files diff --git a/tidb-configuration-file.md b/tidb-configuration-file.md index b9d2bba43327f..cbcf54f60e08c 100644 --- a/tidb-configuration-file.md +++ b/tidb-configuration-file.md @@ -513,7 +513,7 @@ Configuration items related to performance. - In a single transaction, the total size of key-value records cannot exceed this value. The maximum value of this parameter is `1099511627776` (1 TB). Note that if you have used the binlog to serve the downstream consumer Kafka (such as the `arbiter` cluster), the value of this parameter must be no more than `1073741824` (1 GB). This is because 1 GB is the upper limit of a single message size that Kafka can process. Otherwise, an error is returned if this limit is exceeded. - In TiDB v6.5.0 and later versions, this configuration is no longer recommended. The memory size of a transaction will be accumulated into the memory usage of the session, and the [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) variable will take effect when the session memory threshold is exceeded. To be compatible with previous versions, this configuration works as follows when you upgrade from an earlier version to TiDB v6.5.0 or later: - If this configuration is not set or is set to the default value (`104857600`), after an upgrade, the memory size of a transaction will be accumulated into the memory usage of the session, and the `tidb_mem_quota_query` variable will take effect. 
- - If this configuration is not defaulted (`104857600`), it still takes effect and its behavior on controling the size of a single transaction remains unchanged before and after the upgrade. This means that the memory size of the transaction is not controlled by the `tidb_mem_quota_query` variable. + - If this configuration is not defaulted (`104857600`), it still takes effect and its behavior on controlling the size of a single transaction remains unchanged before and after the upgrade. This means that the memory size of the transaction is not controlled by the `tidb_mem_quota_query` variable. ### `tcp-keep-alive` diff --git a/tiflash/tiflash-spill-disk.md b/tiflash/tiflash-spill-disk.md index d1809951aac59..00d22eaf80006 100644 --- a/tiflash/tiflash-spill-disk.md +++ b/tiflash/tiflash-spill-disk.md @@ -17,8 +17,8 @@ Starting from v7.0.0, TiFlash supports spilling intermediate data to disk to rel TiFlash provides two triggering mechanisms for spilling data to disk. -* Operator-level spilling: by specifing the data spilling threshold for each operator, you can control when TiFlash spills data of that operator to disk. -* Query-level spilling: by specifing the maximum memory usage of a query on a TiFlash node and the memory ratio for spilling, you can control when TiFlash spills data of supported operators in a query to disk as needed. +* Operator-level spilling: by specifying the data spilling threshold for each operator, you can control when TiFlash spills data of that operator to disk. +* Query-level spilling: by specifying the maximum memory usage of a query on a TiFlash node and the memory ratio for spilling, you can control when TiFlash spills data of supported operators in a query to disk as needed. ### Operator-level spilling diff --git a/tiflash/tune-tiflash-performance.md b/tiflash/tune-tiflash-performance.md index 8c19bf7d96d07..c2ef222a5327f 100644 --- a/tiflash/tune-tiflash-performance.md +++ b/tiflash/tune-tiflash-performance.md @@ -230,7 +230,7 @@ ALTER TABLE employees COMPACT PARTITION pNorth, pEast TIFLASH REPLICA; ### Replace Shuffled Hash Join with Broadcast Hash Join -For `Join` operations with small tables, the Broadcast Hash Join algorithm can avoid transfering large tables, thereby improving the computing performance. +For `Join` operations with small tables, the Broadcast Hash Join algorithm can avoid transferring large tables, thereby improving the computing performance. - The [`tidb_broadcast_join_threshold_size`](/system-variables.md#tidb_broadcast_join_threshold_size-new-in-v50) variable controls whether to use the Broadcast Hash Join algorithm. If the table size (unit: byte) is smaller than the value of this variable, the Broadcast Hash Join algorithm is used. Otherwise, the Shuffled Hash Join algorithm is used. @@ -244,7 +244,7 @@ For `Join` operations with small tables, the Broadcast Hash Join algorithm can a set @@tidb_broadcast_join_threshold_count = 100000; ``` -The following example shows the query result before and after `tidb_broadcast_join_threshold_size` is re-configured. Before the re-configuration, the `ExchangeType` of `ExchangeSender_29` is `HashPartition`. After the value of this variable chages to `10000000`, the `ExchangeType` of `ExchangeSender_29` changes to `Broadcast`. +The following example shows the query result before and after `tidb_broadcast_join_threshold_size` is re-configured. Before the re-configuration, the `ExchangeType` of `ExchangeSender_29` is `HashPartition`. 
After the value of this variable changes to `10000000`, the `ExchangeType` of `ExchangeSender_29` changes to `Broadcast`. Before `tidb_broadcast_join_threshold_size` is re-configured: diff --git a/tikv-configuration-file.md b/tikv-configuration-file.md index 88a604cdc243b..bdb825bc70662 100644 --- a/tikv-configuration-file.md +++ b/tikv-configuration-file.md @@ -37,7 +37,7 @@ This document only describes the parameters that are not included in command-lin ### `slow-log-threshold` -+ The threshold for outputing slow logs. If the processing time is longer than this threshold, slow logs are output. ++ The threshold for outputting slow logs. If the processing time is longer than this threshold, slow logs are output. + Default value: `"1s"` ### `memory-usage-limit` diff --git a/time-to-live.md b/time-to-live.md index 6b3e74b200c98..e5c38b83d77aa 100644 --- a/time-to-live.md +++ b/time-to-live.md @@ -198,7 +198,7 @@ In addition, TiDB provides three tables to obtain more information about TTL job 1 row in set (0.040 sec) ``` - The column `table_id` is the ID of the partitioned table, and the `parent_table_id` is the ID of the table, corresponding with the ID in `infomation_schema.tables`. If the table is not a partitioned table, the two IDs are the same. + The column `table_id` is the ID of the partitioned table, and the `parent_table_id` is the ID of the table, corresponding with the ID in `information_schema.tables`. If the table is not a partitioned table, the two IDs are the same. The columns `{last, current}_job_{start_time, finish_time, ttl_expire}` describe respectively the start time, finish time, and expiration time used by the TTL job of the last or current execution. The `last_job_summary` column describes the execution status of the last TTL task, including the total number of rows, the number of successful rows, and the number of failed rows. @@ -224,7 +224,7 @@ In addition, TiDB provides three tables to obtain more information about TTL job status: finished ``` - The column `table_id` is the ID of the partitioned table, and the `parent_table_id` is the ID of the table, corresponding with the ID in `infomation_schema.tables`. `table_schema`, `table_name`, and `partition_name` correspond to the database, table name, and partition name. `create_time`, `finish_time`, and `ttl_expire` indicate the creation time, end time, and expiration time of the TTL task. `expired_rows` and `deleted_rows` indicate the number of expired rows and the number of rows deleted successfully. + The column `table_id` is the ID of the partitioned table, and the `parent_table_id` is the ID of the table, corresponding with the ID in `information_schema.tables`. `table_schema`, `table_name`, and `partition_name` correspond to the database, table name, and partition name. `create_time`, `finish_time`, and `ttl_expire` indicate the creation time, end time, and expiration time of the TTL task. `expired_rows` and `deleted_rows` indicate the number of expired rows and the number of rows deleted successfully. ## Compatibility with TiDB tools diff --git a/tiproxy/tiproxy-command-line-flags.md b/tiproxy/tiproxy-command-line-flags.md index 76099961b68a7..072decc43b016 100644 --- a/tiproxy/tiproxy-command-line-flags.md +++ b/tiproxy/tiproxy-command-line-flags.md @@ -42,7 +42,7 @@ This section lists the flags of the client program `tiproxyctl`. ### `--curls` + Specifies the server addresses. You can add multiple listening addresses. 
-+ Type: `comma seperated lists of ip:port` ++ Type: `comma separated lists of ip:port` + Default: `localhost:3080` + Server API gateway addresses. diff --git a/tispark-overview.md b/tispark-overview.md index 8323afd2c3740..747c124539bc2 100644 --- a/tispark-overview.md +++ b/tispark-overview.md @@ -148,9 +148,9 @@ Add the following configuration in `spark-defaults.conf`: ``` spark.sql.extensions org.apache.spark.sql.TiExtensions -spark.tispark.pd.addresses ${your_pd_adress} +spark.tispark.pd.addresses ${your_pd_address} spark.sql.catalog.tidb_catalog org.apache.spark.sql.catalyst.catalog.TiCatalog -spark.sql.catalog.tidb_catalog.pd.addresses ${your_pd_adress} +spark.sql.catalog.tidb_catalog.pd.addresses ${your_pd_address} ``` Start spark-shell with the `--jars` option. diff --git a/tiup/tiup-mirror-reference.md b/tiup/tiup-mirror-reference.md index 4eb30cd2fcb14..520c6d50232d3 100644 --- a/tiup/tiup-mirror-reference.md +++ b/tiup/tiup-mirror-reference.md @@ -154,7 +154,7 @@ The index file's format is as follows: } }, "name": "{owner-name}", # The name of the owner. - "threshod": {N} # Indicates that the components owned by the owner must have at least N valid signatures. + "threshold": {N} # Indicates that the components owned by the owner must have at least N valid signatures. }, ... "{ownerN}": { # The ID of the Nth owner. diff --git a/tune-tikv-memory-performance.md b/tune-tikv-memory-performance.md index 4bd4e9d1ef00e..7c1c45fa60c81 100644 --- a/tune-tikv-memory-performance.md +++ b/tune-tikv-memory-performance.md @@ -109,7 +109,7 @@ job = "tikv" region-split-check-diff = "32MB" [coprocessor] -## If the size of a Region with the range of [a,e) is larger than the value of `region_max_size`, TiKV trys to split the Region to several Regions, for example, the Regions with the ranges of [a,b), [b,c), [c,d), and [d,e). +## If the size of a Region with the range of [a,e) is larger than the value of `region_max_size`, TiKV tries to split the Region to several Regions, for example, the Regions with the ranges of [a,b), [b,c), [c,d), and [d,e). ## After the Region split, the size of the split Regions is equal to the value of `region_split_size` (or slightly larger than the value of `region_split_size`). 
# region-max-size = "144MB" # region-split-size = "96MB" From bab58f338fc47cf5c9ca9ff45f6d130b5a5aaf86 Mon Sep 17 00:00:00 2001 From: Aolin Date: Mon, 29 Apr 2024 14:32:58 +0800 Subject: [PATCH 5/8] add summary for releases notes (#17250) --- get-started-with-tidb-lightning.md | 1 + releases/release-1.0.1.md | 1 + releases/release-1.0.3.md | 1 + releases/release-1.0.7.md | 1 + releases/release-2.0-rc.1.md | 1 + releases/release-2.0.1.md | 1 + releases/release-2.0.6.md | 1 + releases/release-2.0.9.md | 1 + releases/release-2.1-ga.md | 1 + releases/release-2.1.2.md | 1 + releases/release-3.0-beta.md | 1 + releases/release-3.0-ga.md | 1 + releases/release-3.0.0-rc.2.md | 1 + releases/release-3.0.15.md | 1 + releases/release-3.0.5.md | 1 + releases/release-4.0.0-beta.md | 1 + releases/release-4.0.0-rc.2.md | 1 + releases/release-4.0.9.md | 1 + releases/release-5.0.0-rc.md | 1 + releases/release-5.0.3.md | 1 + releases/release-5.0.4.md | 1 + releases/release-5.0.6.md | 1 + releases/release-5.1.0.md | 1 + releases/release-5.2.4.md | 1 + releases/release-5.3.0.md | 1 + releases/release-5.3.1.md | 1 + releases/release-5.3.4.md | 1 + releases/release-5.4.0.md | 1 + releases/release-5.4.3.md | 1 + releases/release-6.0.0-dmr.md | 1 + releases/release-6.2.0.md | 1 + releases/release-6.3.0.md | 1 + releases/release-6.4.0.md | 1 + releases/release-notes.md | 1 + releases/release-pre-ga.md | 1 + releases/release-rc.3.md | 1 + 36 files changed, 36 insertions(+) diff --git a/get-started-with-tidb-lightning.md b/get-started-with-tidb-lightning.md index 9f929d33ab047..55605a5577bd6 100644 --- a/get-started-with-tidb-lightning.md +++ b/get-started-with-tidb-lightning.md @@ -1,6 +1,7 @@ --- title: Quick Start for TiDB Lightning aliases: ['/docs/dev/get-started-with-tidb-lightning/','/docs/dev/how-to/get-started/tidb-lightning/'] +summary: TiDB Lightning is a tool for importing MySQL data into a TiDB cluster. It is recommended for test and trial purposes only, not for production or development environments. The process involves preparing full backup data, deploying the TiDB cluster, installing TiDB Lightning, starting TiDB Lightning, and checking data integrity. For detailed features and usage, refer to the TiDB Lightning Overview. --- # Quick Start for TiDB Lightning diff --git a/releases/release-1.0.1.md b/releases/release-1.0.1.md index eec2891e4bba0..7ff5bc2e67a72 100644 --- a/releases/release-1.0.1.md +++ b/releases/release-1.0.1.md @@ -1,6 +1,7 @@ --- title: TiDB 1.0.1 Release Notes aliases: ['/docs/dev/releases/release-1.0.1/','/docs/dev/releases/101/'] +summary: TiDB 1.0.1 was released on November 1, 2017. Updates include support for canceling DDL Job, optimizing the `IN` expression, correcting the result type of the `Show` statement, supporting log slow query into a separate log file, and fixing bugs. TiKV now supports flow control with write bytes, reduces Raft allocation, increases coprocessor stack size to 10MB, and removes the useless log from the coprocessor. --- # TiDB 1.0.1 Release Notes diff --git a/releases/release-1.0.3.md b/releases/release-1.0.3.md index ac57f7eea1f47..29396839835c2 100644 --- a/releases/release-1.0.3.md +++ b/releases/release-1.0.3.md @@ -1,6 +1,7 @@ --- title: TiDB 1.0.3 Release Notes aliases: ['/docs/dev/releases/release-1.0.3/','/docs/dev/releases/103/'] +summary: TiDB 1.0.3 was released on November 28, 2017. Updates include performance optimization, new configuration options, and bug fixes. 
PD now supports adding more schedulers using API, and TiKV has fixed deadlock and leader value issues. To upgrade from 1.0.2 to 1.0.3, follow the rolling upgrade order of PD, TiKV, and TiDB.
 ---
 
 # TiDB 1.0.3 Release Notes
diff --git a/releases/release-1.0.7.md b/releases/release-1.0.7.md
index c8b9362d3fb67..6ba4487c8404a 100644
--- a/releases/release-1.0.7.md
+++ b/releases/release-1.0.7.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 1.0.7 Release Notes
 aliases: ['/docs/dev/releases/release-1.0.7/','/docs/dev/releases/107/']
+summary: TiDB 1.0.7 is released with various updates, including optimization of commands, fixes for data race and resource leak issues, a new session variable for log query control, and improved stability of test results. PD and TiKV also have updates that fix scheduling loss and compatibility issues and add support for table scan and remote mode in tikv-ctl. To upgrade from 1.0.6 to 1.0.7, follow the rolling upgrade order of PD, TiKV, and TiDB.
 ---
 
 # TiDB 1.0.7 Release Notes
diff --git a/releases/release-2.0-rc.1.md b/releases/release-2.0-rc.1.md
index aaff6e7e006df..de346b63c53c0 100644
--- a/releases/release-2.0-rc.1.md
+++ b/releases/release-2.0-rc.1.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 2.0 RC1 Release Notes
 aliases: ['/docs/dev/releases/release-2.0-rc.1/','/docs/dev/releases/2rc1/']
+summary: TiDB 2.0 RC1, released on March 9, 2018, brings improvements in MySQL compatibility, SQL optimization, and stability. Key updates include memory usage limitation for SQL statements, Stream Aggregate operator support, configuration file validation, and an HTTP API for configuration information. TiDB also enhances MySQL syntax compatibility, the optimizer, and Boolean field length handling. PD sees logic and performance optimizations, while TiKV fixes a gRPC call issue and adds gRPC APIs for metrics. Additionally, TiKV checks SSD usage, optimizes read performance, and improves metrics usage.
 ---
 
 # TiDB 2.0 RC1 Release Notes
diff --git a/releases/release-2.0.1.md b/releases/release-2.0.1.md
index 4270f3609892e..b8242d7eaca04 100644
--- a/releases/release-2.0.1.md
+++ b/releases/release-2.0.1.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 2.0.1 Release Notes
 aliases: ['/docs/dev/releases/release-2.0.1/','/docs/dev/releases/201/']
+summary: TiDB 2.0.1 was released on May 16, 2018, with improvements in MySQL compatibility and system stability. Updates include real-time progress for 'Add Index', a new session variable for automatic statistics updates, bug fixes, compatibility improvements, and behavior changes. PD added a new scheduler, optimized Region balancing, and fixed various issues. TiKV fixed issues related to reading, thread calls, raftstore blocking, and splits causing dirty reads. Overall, the release focuses on enhancing performance, stability, and compatibility.
 ---
 
 # TiDB 2.0.1 Release Notes
diff --git a/releases/release-2.0.6.md b/releases/release-2.0.6.md
index 1c389a9079f5f..9dbb73443f984 100644
--- a/releases/release-2.0.6.md
+++ b/releases/release-2.0.6.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 2.0.6 Release Notes
 aliases: ['/docs/dev/releases/release-2.0.6/','/docs/dev/releases/206/']
+summary: TiDB 2.0.6 was released on August 6, 2018, with improvements in system compatibility and stability. The release includes various improvements and bug fixes for TiDB and TiKV. Some notable improvements include reducing transaction conflicts, improving row count estimation accuracy, and adding a recovery mechanism for panics during the execution of `ANALYZE TABLE`. Bug fixes address issues such as incompatible `DROP USER` statement behavior, OOM errors for `INSERT`/`LOAD DATA` statements, and incorrect results for prefix index and `DECIMAL` operations. TiKV also sees improvements in scheduler slots, rollback transaction records, and RocksDB log file management, along with a fix for a crash issue during data type conversion.
 ---
 
 # TiDB 2.0.6 Release Notes
diff --git a/releases/release-2.0.9.md b/releases/release-2.0.9.md
index 4caed728aa23a..0ce6b2a3b30bb 100644
--- a/releases/release-2.0.9.md
+++ b/releases/release-2.0.9.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 2.0.9 Release Notes
 aliases: ['/docs/dev/releases/release-2.0.9/','/docs/dev/releases/209/']
+summary: TiDB 2.0.9 was released on November 19, 2018, with significant improvements in system compatibility and stability. The release includes fixes for various issues, such as an empty statistics histogram, a panic issue with the UNION ALL statement, a stack overflow issue, and support for specifying the utf8mb4 character set. PD and TiKV also received fixes for issues related to server startup failure and interface limits.
 ---
 
 # TiDB 2.0.9 Release Notes
diff --git a/releases/release-2.1-ga.md b/releases/release-2.1-ga.md
index c070c48b6b6f5..db3b5629a9bf1 100644
--- a/releases/release-2.1-ga.md
+++ b/releases/release-2.1-ga.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 2.1 GA Release Notes
 aliases: ['/docs/dev/releases/release-2.1-ga/','/docs/dev/releases/2.1ga/']
+summary: TiDB 2.1 GA was released on November 30, 2018, with significant improvements in stability, performance, compatibility, and usability. The release includes optimizations in the SQL optimizer, SQL executor, statistics, expressions, server, DDL, compatibility, Placement Driver (PD), TiKV, and tools. It also introduces TiDB Lightning for fast full data import and supports the new TiDB Binlog. However, TiDB 2.1 does not support downgrading to v2.0.x or earlier due to the adoption of the new storage engine. Additionally, parallel DDL is enabled in TiDB 2.1, so clusters with a TiDB version earlier than 2.0.1 cannot upgrade to 2.1 using a rolling update. If upgrading from TiDB 2.0.6 or earlier to TiDB 2.1, ongoing DDL operations may slow down the upgrading process.
 ---
 
 # TiDB 2.1 GA Release Notes
diff --git a/releases/release-2.1.2.md b/releases/release-2.1.2.md
index 810feece1c73b..5c1555236e222 100644
--- a/releases/release-2.1.2.md
+++ b/releases/release-2.1.2.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 2.1.2 Release Notes
 aliases: ['/docs/dev/releases/release-2.1.2/','/docs/dev/releases/2.1.2/']
+summary: TiDB 2.1.2 and TiDB Ansible 2.1.2 were released on December 22, 2018. The release includes improvements in system compatibility and stability. Key updates include compatibility with TiDB Binlog of the Kafka version, an improved exit mechanism during rolling updates, and fixes for various issues. PD and TiKV also received updates, such as fixes for Region merge issues and support for a configuration format in the unit of 'DAY'. Additionally, TiDB Lightning and TiDB Binlog were updated to support new features and eliminate bottlenecks.
 ---
 
 # TiDB 2.1.2 Release Notes
diff --git a/releases/release-3.0-beta.md b/releases/release-3.0-beta.md
index f17d7cce3b4cb..2aa292008af59 100644
--- a/releases/release-3.0-beta.md
+++ b/releases/release-3.0-beta.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 3.0 Beta Release Notes
 aliases: ['/docs/dev/releases/release-3.0-beta/','/docs/dev/releases/3.0beta/']
+summary: TiDB 3.0 Beta, released on January 19, 2019, focuses on stability, the SQL optimizer, statistics, and the execution engine. New features include support for views, window functions, range partitioning, and hash partitioning. The SQL optimizer has been enhanced with various optimizations, including support for index join in transactions, constant propagation optimization, and support for subqueries in the DO statement. The SQL executor has also been optimized for better performance. Privilege management, server, compatibility, and DDL have all been improved. TiDB Lightning now supports batch import for a single table, while PD and TiKV have also received various enhancements and new features.
 ---
 
 # TiDB 3.0 Beta Release Notes
diff --git a/releases/release-3.0-ga.md b/releases/release-3.0-ga.md
index 54edd52772eed..39cdf517dcb98 100644
--- a/releases/release-3.0-ga.md
+++ b/releases/release-3.0-ga.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 3.0 GA Release Notes
 aliases: ['/docs/dev/releases/release-3.0-ga/','/docs/dev/releases/3.0-ga/']
+summary: TiDB 3.0 GA was released on June 28, 2019, with improved stability, usability, and performance. New features include Window Functions, Views, partitioned tables, and the plugin framework. The SQL Optimizer has been optimized for better performance, and DDL now supports fast recovery of mistakenly deleted tables. TiKV now supports distributed GC, multi-thread Raftstore, and batch receiving and sending of Raft messages. Tools like TiDB Lightning and TiDB Binlog have also been enhanced with new features and performance improvements. TiDB Ansible has been upgraded to support deployment and operations for TiDB Lightning, and to optimize monitoring components.
 ---
 
 # TiDB 3.0 GA Release Notes
diff --git a/releases/release-3.0.0-rc.2.md b/releases/release-3.0.0-rc.2.md
index d8aa5737cebc4..27ec05bad3ce4 100644
--- a/releases/release-3.0.0-rc.2.md
+++ b/releases/release-3.0.0-rc.2.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 3.0.0-rc.2 Release Notes
 aliases: ['/docs/dev/releases/release-3.0.0-rc.2/','/docs/dev/releases/3.0.0-rc.2/']
+summary: TiDB 3.0.0-rc.2 was released on May 28, 2019, with improvements in stability, usability, features, the SQL optimizer, statistics, and the execution engine. The release includes enhancements to the SQL optimizer, execution engine, server, DDL, PD, TiKV, and tools like TiDB Binlog and TiDB Lightning. Some notable improvements include support for Index Join in more scenarios, proper handling of virtual columns, and a new metric to track data replication downstream.
 ---
 
 # TiDB 3.0.0-rc.2 Release Notes
diff --git a/releases/release-3.0.15.md b/releases/release-3.0.15.md
index 87501606431cf..7c4f2b1023d46 100644
--- a/releases/release-3.0.15.md
+++ b/releases/release-3.0.15.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 3.0.15 Release Notes
 aliases: ['/docs/dev/releases/release-3.0.15/']
+summary: TiDB 3.0.15 was released on June 5, 2020. New features include support for the `admin recover index` and `admin check index` statements on partitioned tables, as well as optimization of the memory allocation mechanism. Bug fixes address issues such as incorrect results in PointGet and inconsistent results between TiDB and MySQL when XOR operates on a floating-point number. TiKV fixes issues related to memory defragmentation and gRPC disconnection.
 ---
 
 # TiDB 3.0.15 Release Notes
diff --git a/releases/release-3.0.5.md b/releases/release-3.0.5.md
index 2a8474be3bf61..a20f90118e935 100644
--- a/releases/release-3.0.5.md
+++ b/releases/release-3.0.5.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 3.0.5 Release Notes
 aliases: ['/docs/dev/releases/release-3.0.5/','/docs/dev/releases/3.0.5/']
+summary: TiDB 3.0.5 was released on October 25, 2019, with various improvements and bug fixes. The release includes enhancements to the SQL optimizer, SQL execution engine, server, DDL, monitor, TiKV, PD, TiDB Binlog, TiDB Lightning, and TiDB Ansible. Improvements include support for boundary checking on Window Functions, fixes for index join and outer join issues, and new monitoring metrics for various operations. Additionally, TiKV received storage and performance optimizations, while PD saw improvements in storage precision and HTTP request handling. TiDB Ansible also received updates to monitoring metrics and configuration file simplification.
 ---
 
 # TiDB 3.0.5 Release Notes
diff --git a/releases/release-4.0.0-beta.md b/releases/release-4.0.0-beta.md
index faa45cc7ec4bc..2c2a10fcbc23e 100644
--- a/releases/release-4.0.0-beta.md
+++ b/releases/release-4.0.0-beta.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 4.0 Beta Release Notes
 aliases: ['/docs/dev/releases/release-4.0.0-beta/','/docs/dev/releases/4.0.0-beta/']
+summary: TiDB version 4.0.0-beta and TiDB Ansible version 4.0.0-beta were released on January 17, 2020. The release includes various improvements such as increased accuracy in calculating the cost of Index Join, support for Table Locks, and optimization of the error codes of SQL error messages. TiKV was also upgraded to RocksDB version 6.4.6 and now supports quick backup and restoration. PD now optimizes hotspot scheduling and adds the Placement Rules feature. TiDB Lightning added a parameter to set the password of the downstream database, and TiDB Ansible now supports deploying and maintaining TiFlash.
 ---
 
 # TiDB 4.0 Beta Release Notes
diff --git a/releases/release-4.0.0-rc.2.md b/releases/release-4.0.0-rc.2.md
index 9cadff71ef25c..2bb086df641c0 100644
--- a/releases/release-4.0.0-rc.2.md
+++ b/releases/release-4.0.0-rc.2.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 4.0 RC.2 Release Notes
 aliases: ['/docs/dev/releases/release-4.0.0-rc.2/']
+summary: TiDB 4.0 RC.2 was released on May 15, 2020. The release includes compatibility changes, important bug fixes, new features, and bug fixes for TiDB, TiKV, PD, TiFlash, and various tools. Some notable changes include the removal of the size limit for a single transaction when TiDB Binlog is enabled, support for the `BACKUP` and `RESTORE` commands, and the addition of encryption-related monitoring metrics in the Grafana dashboard. Additionally, there are numerous bug fixes for issues such as wrong partition selection, incorrect index range building, and performance reduction. The release also introduces new features like support for the `auto_random` option in the `CREATE TABLE` statement and the ability to manage replication tasks using `cdc cli`.
 ---
 
 # TiDB 4.0 RC.2 Release Notes
diff --git a/releases/release-4.0.9.md b/releases/release-4.0.9.md
index 016dede57f686..9b62baa1d96a9 100644
--- a/releases/release-4.0.9.md
+++ b/releases/release-4.0.9.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 4.0.9 Release Notes
+summary: TiDB 4.0.9 was released on December 21, 2020. The release includes compatibility changes, new features, improvements, bug fixes, and updates to TiKV, TiDB Dashboard, PD, TiFlash, and various tools. Notable changes include the deprecation of the `enable-streaming` configuration item in TiDB, support for storing the latest data of the storage engine on multiple disks in TiFlash, and various bug fixes in TiDB and TiKV.
 ---
 
 # TiDB 4.0.9 Release Notes
diff --git a/releases/release-5.0.0-rc.md b/releases/release-5.0.0-rc.md
index befdfb08b4236..f8a7f8ff444a9 100644
--- a/releases/release-5.0.0-rc.md
+++ b/releases/release-5.0.0-rc.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 5.0 RC Release Notes
+summary: TiDB v5.0.0-rc is the predecessor version of TiDB v5.0. It includes new features like clustered index, async commit, reduced jitters, the Raft Joint Consensus algorithm, optimized `EXPLAIN` features, invisible index, and improved reliability for enterprise data. It also supports desensitizing error messages and log files for security. Performance improvements include async commit, optimizer stability, and reduced performance jitter. It also enhances system availability during Region membership changes. Additionally, it supports backup and restore to AWS S3 and Google Cloud GCS, data import/export, and optimized `EXPLAIN` features for troubleshooting SQL performance issues. Deployment and maintenance improvements include an enhanced `mirror` command and an easier installation process.
 ---
 
 # TiDB 5.0 RC Release Notes
diff --git a/releases/release-5.0.3.md b/releases/release-5.0.3.md
index eb29090c88965..038bd3dfc8bc4 100644
--- a/releases/release-5.0.3.md
+++ b/releases/release-5.0.3.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 5.0.3 Release Notes
+summary: TiDB 5.0.3 was released on July 2, 2021. The release includes compatibility changes, feature enhancements, improvements, bug fixes, and updates for TiDB, TiKV, PD, TiFlash, and tools like TiCDC, Backup & Restore (BR), and TiDB Lightning. Some notable changes include support for pushing down operators and functions to TiFlash, memory consumption limits for TiCDC, and bug fixes for various issues in TiDB, TiKV, PD, and TiFlash.
 ---
 
 # TiDB 5.0.3 Release Notes
diff --git a/releases/release-5.0.4.md b/releases/release-5.0.4.md
index fc7ad9805dcb9..be37425e8d9c6 100644
--- a/releases/release-5.0.4.md
+++ b/releases/release-5.0.4.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 5.0.4 Release Notes
+summary: Compatibility changes include fixes for slow `SHOW VARIABLES` execution, a default value change for `tidb_stmt_summary_max_stmt_count`, and bug fixes that may cause upgrade incompatibilities. Feature enhancements include support for setting `tidb_enforce_mpp=1` and dynamic TiCDC configurations. Improvements cover the auto-analyze trigger, MPP query retry support, and a stable result mode. Bug fixes address various issues in TiDB, TiKV, PD, TiFlash, and tools like Dumpling and TiCDC.
 ---
 
 # TiDB 5.0.4 Release Notes
diff --git a/releases/release-5.0.6.md b/releases/release-5.0.6.md
index 584e8fee26cd8..5f780a20fae3a 100644
--- a/releases/release-5.0.6.md
+++ b/releases/release-5.0.6.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 5.0.6 Release Notes
 category: Releases
+summary: TiDB 5.0.6 was released on December 31, 2021. The release includes compatibility changes, improvements, bug fixes, and updates to TiKV, PD, TiFlash, and tools such as TiCDC, TiDB Lightning, Backup & Restore (BR), and Dumpling. The changes include enhancements to error handling, performance improvements, bug fixes related to SQL statements, and various optimizations for different tools.
 ---
 
 # TiDB 5.0.6 Release Notes
diff --git a/releases/release-5.1.0.md b/releases/release-5.1.0.md
index 704cb272766f6..608bad96342f1 100644
--- a/releases/release-5.1.0.md
+++ b/releases/release-5.1.0.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 5.1 Release Notes
+summary: TiDB 5.1 introduces support for Common Table Expressions, the dynamic privilege feature, and Stale Read. It also includes a new statistics type, the Lock View feature, and a TiKV write rate limiter. Compatibility changes include new system and configuration variables. Other improvements and bug fixes are also part of this release.
 ---
 
 # TiDB 5.1 Release Notes
diff --git a/releases/release-5.2.4.md b/releases/release-5.2.4.md
index 4064c23c9a7fe..a93247a1088c8 100644
--- a/releases/release-5.2.4.md
+++ b/releases/release-5.2.4.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB 5.2.4 Release Notes
 category: Releases
+summary: Learn about the new features, compatibility changes, improvements, and bug fixes in TiDB 5.2.4.
 ---
 
 # TiDB 5.2.4 Release Notes
diff --git a/releases/release-5.3.0.md b/releases/release-5.3.0.md
index 9014dd337163e..0284ae17d7cf9 100644
--- a/releases/release-5.3.0.md
+++ b/releases/release-5.3.0.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 5.3 Release Notes
+summary: TiDB 5.3.0 introduces temporary tables, table attributes, and user privileges on TiDB Dashboard for improved performance and security. It also enhances TiDB Data Migration, supports parallel import using multiple TiDB Lightning instances, and adds continuous profiling for better observability. Compatibility changes and configuration file parameters have been modified. The release also includes new SQL features, security enhancements, stability improvements, and improved diagnostic efficiency. Additionally, bug fixes and improvements have been made to TiDB, TiKV, PD, TiFlash, and TiCDC. The cyclic replication feature between TiDB clusters has been removed. Telemetry now includes information about the usage of the TEMPORARY TABLE feature.
 ---
 
 # TiDB 5.3 Release Notes
diff --git a/releases/release-5.3.1.md b/releases/release-5.3.1.md
index 8f1a1da8e4974..849287628fb14 100644
--- a/releases/release-5.3.1.md
+++ b/releases/release-5.3.1.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 5.3.1 Release Notes
+summary: TiDB 5.3.1 was released on March 3, 2022. The release includes compatibility changes, improvements, and bug fixes for TiDB, TiKV, PD, TiCDC, TiFlash, Backup & Restore (BR), and TiDB Data Migration (DM). Some notable changes include optimized user login mode mapping, reduced TiCDC recovery time, and fixes for various bugs in TiDB, TiKV, PD, TiFlash, and tools like TiCDC and TiDB Lightning. These fixes address issues related to data import, user login, garbage collection, configuration parameters, and more.
 ---
 
 # TiDB 5.3.1 Release Notes
diff --git a/releases/release-5.3.4.md b/releases/release-5.3.4.md
index ca20e7ca1c7d3..5addade1af19b 100644
--- a/releases/release-5.3.4.md
+++ b/releases/release-5.3.4.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 5.3.4 Release Notes
+summary: TiDB 5.3.4 was released on November 24, 2022. The release includes improvements to TiKV and bug fixes for TiDB, PD, TiFlash, Dumpling, and TiCDC. Key bug fixes address issues related to TLS certificate reloading, Region cache cleanup, wrong data writing, database-level privileges, and authentication failures. Other fixes address issues with logical operators, stream timeout, leader switchover, and data dumping.
 ---
 
 # TiDB 5.3.4 Release Notes
diff --git a/releases/release-5.4.0.md b/releases/release-5.4.0.md
index 396693e162c44..832ac4884d262 100644
--- a/releases/release-5.4.0.md
+++ b/releases/release-5.4.0.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 5.4 Release Notes
+summary: TiDB 5.4 introduces support for the GBK character set, Index Merge, reading stale data, persisting statistics configuration, and using Raft Engine as the log storage engine of TiKV. It also reduces the impact of backups, supports Azure Blob storage, and enhances TiFlash and the MPP engine. Compatibility changes include new system variables and configuration file parameters. Other improvements cover SQL, security, performance, stability, high availability, data migration, diagnostic efficiency, and deployment. Bug fixes address issues in TiDB, TiKV, PD, TiFlash, BR, TiCDC, DM, TiDB Lightning, and TiDB Binlog.
 ---
 
 # TiDB 5.4 Release Notes
diff --git a/releases/release-5.4.3.md b/releases/release-5.4.3.md
index 01dd37d348833..f23687bd3ad19 100644
--- a/releases/release-5.4.3.md
+++ b/releases/release-5.4.3.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 5.4.3 Release Notes
+summary: TiDB 5.4.3 was released on October 13, 2022. The release includes various improvements and bug fixes for TiKV, TiCDC, TiFlash, PD, and other tools. Improvements include support for configuring RocksDB write stall settings, optimizing Scatter Region to batch mode, and reducing performance overhead in multi-Region scenarios. Bug fixes address issues such as incorrect output of `SHOW CREATE PLACEMENT POLICY`, DDL statements getting stuck after PD node replacement, and various issues causing incorrect results and errors in TiDB, TiKV, PD, TiFlash, and other tools. The release also provides workarounds and affected versions for specific issues.
 ---
 
 # TiDB 5.4.3 Release Notes
diff --git a/releases/release-6.0.0-dmr.md b/releases/release-6.0.0-dmr.md
index 826bed2523789..6417a8c20462e 100644
--- a/releases/release-6.0.0-dmr.md
+++ b/releases/release-6.0.0-dmr.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 6.0.0 Release Notes
+summary: Learn about the new features, compatibility changes, improvements, and bug fixes in TiDB 6.0.0.
 ---
 
 # TiDB 6.0.0 Release Notes
diff --git a/releases/release-6.2.0.md b/releases/release-6.2.0.md
index 37ca32dd9c3f4..468ecf8ebdac9 100644
--- a/releases/release-6.2.0.md
+++ b/releases/release-6.2.0.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 6.2.0 Release Notes
+summary: TiDB 6.2.0-DMR introduces new features like visual execution plans, a monitoring page, and Lock View. It also supports concurrent DDL operations and enhances the performance of aggregation operations. TiKV now supports automatic CPU usage tuning and detailed configuration information listing. TiFlash adds FastScan for data scanning and improves error handling. BR now supports continuous data validation and automatically identifies the region of Amazon S3 buckets. TiCDC supports filtering DDL and DML events. There are also compatibility changes, bug fixes, and improvements across various tools.
 ---
 
 # TiDB 6.2.0 Release Notes
diff --git a/releases/release-6.3.0.md b/releases/release-6.3.0.md
index 98ff3bcb881ad..ad4b13bd9faf8 100644
--- a/releases/release-6.3.0.md
+++ b/releases/release-6.3.0.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 6.3.0 Release Notes
+summary: TiDB 6.3.0-DMR, released on September 30, 2022, introduces new features and improvements, including encryption at rest using the SM4 algorithm in TiKV, authentication using the SM3 algorithm in TiDB, and support for the JSON data type and functions. It also provides execution time metrics at a finer granularity, enhances output for slow logs and `TRACE` statements, and supports deadlock history information in TiDB Dashboard. Additionally, TiDB v6.3.0 introduces new system variables and configuration file parameters, and fixes various bugs and issues. The release also includes improvements in TiKV, PD, TiFlash, Backup & Restore (BR), TiCDC, TiDB Binlog, TiDB Data Migration (DM), and TiDB Lightning.
 ---
 
 # TiDB 6.3.0 Release Notes
diff --git a/releases/release-6.4.0.md b/releases/release-6.4.0.md
index f10166faea90c..b9f6eb353a396 100644
--- a/releases/release-6.4.0.md
+++ b/releases/release-6.4.0.md
@@ -1,5 +1,6 @@
 ---
 title: TiDB 6.4.0 Release Notes
+summary: TiDB 6.4.0-DMR introduces new features and improvements, including support for restoring a cluster to a specific point in time, compatibility with Linear Hash partitioning syntax, and a high-performance `AUTO_INCREMENT` mode. It also enhances fault recovery, memory usage control, and statistics collection. TiFlash now supports the SM4 algorithm for encryption at rest, and TiCDC supports replicating data to Kafka. The release also includes bug fixes and improvements across various tools and components.
 ---
 
 # TiDB 6.4.0 Release Notes
diff --git a/releases/release-notes.md b/releases/release-notes.md
index d8202299bee5b..1480319d7884d 100644
--- a/releases/release-notes.md
+++ b/releases/release-notes.md
@@ -1,6 +1,7 @@
 ---
 title: Release Notes
 aliases: ['/docs/dev/releases/release-notes/','/docs/dev/releases/rn/']
+summary: TiDB has released multiple versions, including 8.0.0-DMR, 7.6.0-DMR, 7.5.1, 7.5.0, 7.4.0-DMR, 7.3.0-DMR, 7.2.0-DMR, 7.1.4, 7.1.3, 7.1.2, 7.1.1, 7.1.0, 7.0.0-DMR, 6.6.0-DMR, 6.5.9, 6.5.8, 6.5.7, 6.5.6, 6.5.5, 6.5.4, 6.5.3, 6.5.2, 6.5.1, 6.5.0, 6.4.0-DMR, 6.3.0-DMR, 6.2.0-DMR, 6.1.7, 6.1.6, 6.1.5, 6.1.4, 6.1.3, 6.1.2, 6.1.1, 6.1.0, 6.0.0-DMR, 5.4.3, 5.4.2, 5.4.1, 5.4.0, 5.3.4, 5.3.3, 5.3.2, 5.3.1, 5.3.0, 5.2.4, 5.2.3, 5.2.2, 5.2.1, 5.2.0, 5.1.5, 5.1.4, 5.1.3, 5.1.2, 5.1.1, 5.1.0, 5.0.6, 5.0.5, 5.0.4, 5.0.3, 5.0.2, 5.0.1, 5.0.0, 5.0.0-rc, 4.0.16, 4.0.15, 4.0.14, 4.0.13, 4.0.12, 4.0.11, 4.0.10, 4.0.9, 4.0.8, 4.0.7, 4.0.6, 4.0.5, 4.0.4, 4.0.3, 4.0.2, 4.0.1, 4.0.0, 4.0.0-rc.2, 4.0.0-rc.1, 4.0.0-rc, 4.0.0-beta.2, 4.0.0-beta.1, 4.0.0-beta, 3.1.2, 3.1.1, 3.1.0, 3.1.0-rc, 3.1.0-beta.2, 3.1.0-beta.1, 3.1.0-beta, 3.0.20, 3.0.19, 3.0.18, 3.0.17, 3.0.16, 3.0.15, 3.0.14, 3.0.13, 3.0.12, 3.0.11, 3.0.10, 3.0.9, 3.0.8, 3.0.7, 3.0.6, 3.0.5, 3.0.4, 3.0.3, 3.0.2, 3.0.1, 3.0.0, 3.0.0-rc.3, 3.0.0-rc.2, 3.0.0-rc.1, 3.0.0-beta.1, 3.0.0-beta, 2.1.19, 2.1.18, 2.1.17, 2.1.16, 2.1.15, 2.1.14, 2.1.13, 2.1.12, 2.1.11, 2.1.10, 2.1.9, 2.1.8, 2.1.7, 2.1.6, 2.1.5, 2.1.4, 2.1.3, 2.1.2, 2.1.1, 2.1.0, 2.1.0-rc.5, 2.1.0-rc.4, 2.1.0-rc.3, 2.1.0-rc.2, 2.1.0-rc.1, 2.1.0-beta, 2.0.11, 2.0.10, 2.0.9, 2.0.8, 2.0.7, 2.0.6, 2.0.5, 2.0.4, 2.0.3, 2.0.2, 2.0.1, 2.0.0, 2.0.0-rc.5, 2.0.0-rc.4, 2.0.0-rc.3, 2.0.0-rc.1, 1.1.0-beta, 1.1.0-alpha, 1.0.8, 1.0.7, 1.0.6, 1.0.5, 1.0.4, 1.0.3, 1.0.2, 1.0.1, 1.0.0, Pre-GA, rc4, rc3, rc2, rc1.
 ---
 
 # TiDB Release Notes
diff --git a/releases/release-pre-ga.md b/releases/release-pre-ga.md
index effafb4181013..c41a06b28855e 100644
--- a/releases/release-pre-ga.md
+++ b/releases/release-pre-ga.md
@@ -1,6 +1,7 @@
 ---
 title: Pre-GA release notes
 aliases: ['/docs/dev/releases/release-pre-ga/','/docs/dev/releases/prega/']
+summary: TiDB Pre-GA release on August 30, 2017, focuses on MySQL compatibility, SQL optimization, stability, and performance. TiDB introduces SQL query optimizer enhancements, MySQL compatibility, JSON type support, and memory consumption reduction. Placement Driver (PD) now supports manual leader change, while TiKV uses a dedicated RocksDB instance for Raft log storage and improves performance. TiDB Connector for Spark Beta Release implements predicate pushdown, aggregation pushdown, and range pruning, and is capable of running TPC-H queries.
 ---
 
 # Pre-GA Release Notes
diff --git a/releases/release-rc.3.md b/releases/release-rc.3.md
index c63980ff53e5d..f8d49e8efb291 100644
--- a/releases/release-rc.3.md
+++ b/releases/release-rc.3.md
@@ -1,6 +1,7 @@
 ---
 title: TiDB RC3 Release Notes
 aliases: ['/docs/dev/releases/release-rc.3/','/docs/dev/releases/rc3/']
+summary: TiDB RC3, released on June 16, 2017, focuses on MySQL compatibility, SQL optimization, stability, and performance. Highlights include refined privilege management, accelerated DDL, optimized load balancing, and the open-sourced TiDB Ansible for easy cluster management. Detailed updates for TiDB, Placement Driver (PD), and TiKV include improved SQL query optimization, complete privilege management, support for an HTTP API, system variables for query concurrency control, and more efficient data balance. PD supports gRPC, a disaster recovery toolkit, and hot Region scheduling. TiKV supports gRPC, SST format snapshot, memory leak detection, and improved data importing speed. Overall, the release enhances performance, stability, and management capabilities.
 ---
 
 # TiDB RC3 Release Notes
From 79c2252b4041e2152c74a30a99ce9b81792074d6 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Dani=C3=ABl=20van=20Eeden?=
Date: Tue, 30 Apr 2024 09:23:33 +0200
Subject: [PATCH 6/8] Link CTE docs (#17354)

---
 develop/dev-guide-use-common-table-expression.md | 2 +-
 sql-statements/sql-statement-with.md | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/develop/dev-guide-use-common-table-expression.md b/develop/dev-guide-use-common-table-expression.md
index f9b89a6da9f31..71854c515692f 100644
--- a/develop/dev-guide-use-common-table-expression.md
+++ b/develop/dev-guide-use-common-table-expression.md
@@ -15,7 +15,7 @@ Since TiDB v5.1, TiDB supports the CTE of the ANSI SQL99 standard and recursion.
 
 ## Basic use
 
-A Common Table Expression (CTE) is a temporary result set that can be referred to multiple times within a SQL statement to improve the statement readability and execution efficiency. You can apply the `WITH` statement to use CTE.
+A Common Table Expression (CTE) is a temporary result set that can be referred to multiple times within a SQL statement to improve the statement readability and execution efficiency. You can apply the [`WITH`](/sql-statements/sql-statement-with.md) statement to use CTE.
 
 Common Table Expressions can be classified into two types: non-recursive CTE and recursive CTE.
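For instance, a minimal non-recursive CTE defined with `WITH` might look like the following sketch (illustrative only; the `books` table and its columns are hypothetical and not part of this patch):

```sql
-- Define a temporary result set named author_totals,
-- then refer to it in the main query.
WITH author_totals AS (
    SELECT author_id, COUNT(*) AS book_count
    FROM books
    GROUP BY author_id
)
SELECT author_id, book_count
FROM author_totals
WHERE book_count > 10;
```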
diff --git a/sql-statements/sql-statement-with.md b/sql-statements/sql-statement-with.md
index 0f97379141c9b..2f0130337a95d 100644
--- a/sql-statements/sql-statement-with.md
+++ b/sql-statements/sql-statement-with.md
@@ -84,6 +84,7 @@ WITH RECURSIVE cte(a) AS (SELECT 1 UNION SELECT a+1 FROM cte WHERE a < 5) SELECT
 
 ## See also
 
+* [Developer Guide: Common Table Expression](/develop/dev-guide-use-common-table-expression.md)
 * [SELECT](/sql-statements/sql-statement-select.md)
 * [INSERT](/sql-statements/sql-statement-insert.md)
 * [DELETE](/sql-statements/sql-statement-delete.md)
From f3ce2a9ba506250cae6f176f112e4a77c2b70626 Mon Sep 17 00:00:00 2001
From: xixirangrang
Date: Tue, 30 Apr 2024 16:37:03 +0800
Subject: [PATCH 7/8] grafana: explained metrics in log backup row (#17387)

---
 grafana-tikv-dashboard.md | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/grafana-tikv-dashboard.md b/grafana-tikv-dashboard.md
index d39f714025f05..18579e8bd2fcf 100644
--- a/grafana-tikv-dashboard.md
+++ b/grafana-tikv-dashboard.md
@@ -459,6 +459,41 @@ This section provides a detailed description of these key metrics on the **TiKV-
 - Encrypt/decrypt data nanos: The histogram of duration on encrypting/decrypting data each time
 - Read/write encryption meta duration: The time consumed for reading/writing encryption meta files
 
+### Log Backup
+
+- Handle Event Rate: The speed of handling write events
+- Initial Scan Generate Event Throughput: The incremental scanning speed when generating a new listener stream
+- Abnormal Checkpoint TS Lag: The lag of the current checkpoint TS to the present time for each task
+- Memory Of Events: The estimated amount of memory occupied by temporary data generated by incremental scanning
+- Observed Region Count: The number of Regions currently listened to
+- Errors: The number and type of retryable and non-fatal errors
+- Fatal Errors: The number and type of fatal errors. Usually, fatal errors cause the task to be paused.
+- Checkpoint TS of Tasks: The checkpoint TS for each task
+- Flush Duration: The heat map of how long it takes to move cached data to external storage
+- Initial Scanning Duration: The heat map of how long incremental scanning takes when creating a new listening stream
+- Convert Raft Event Duration: The heat map of how long it takes to transform a Raft log entry into backup data after creating a listening stream
+- Command Batch Size: The batch size (within a single Raft group) of the listening Raft command
+- Save to Temp File Duration: The heat map of how long it takes to temporarily store a batch of backup data (spanning several tasks) into the temporary file area
+- Write to Temp File Duration: The heat map of how long it takes to temporarily store a batch of backup data (from a particular task) into the temporary file area
+- System Write Call Duration: The heat map of how long it takes to write a batch of backup data (from a Region) to a temporary file
+- Internal Message Type: The type of messages received by the actor responsible for log backup within TiKV
+- Internal Message Handling Duration (P90|P99): The speed of consuming and processing each type of message
+- Initial Scan RocksDB Throughput: The read traffic generated by RocksDB internal logging during incremental scanning
+- Initial Scan RocksDB Operation: The number of individual operations logged internally by RocksDB during incremental scanning
+- Initial Scanning Trigger Reason: The reason for triggering incremental scanning
+- Region Checkpoint Key Putting: The number of checkpoint operations logged to the PD
+
+> **Note:**
+>
+> The following monitoring metrics all use TiDB nodes as their data source, but they have some impact on the log backup process. Therefore, they are placed in the **TiKV Details** dashboard for ease of reference. TiKV actively pushes progress most of the time, so it is normal for some of the following monitoring metrics to occasionally have no sampled data.
+
+- Request Checkpoint Batch Size: The request batch size when the log backup coordinator requests checkpoint information from each TiKV
+- Tick Duration \[P99|P90\]: The time taken by the tick inside the coordinator
+- Region Checkpoint Failure Reason: The reason why a Region checkpoint cannot advance within the coordinator
+- Request Result: The record of the coordinator's success or failure in advancing the Region checkpoint
+- Get Region Operation Count: The number of times the coordinator requests Region information from the PD
+- Try Advance Trigger Time: The time taken for the coordinator to attempt to advance the checkpoint
+
 ### Explanation of Common Parameters
 
 #### gRPC Message Type
From 791b808055d7c874700a389676adecbdd32c3bbd Mon Sep 17 00:00:00 2001
From: xixirangrang
Date: Tue, 30 Apr 2024 16:53:33 +0800
Subject: [PATCH 8/8] fix pd cert allowed cn description (#17397)

---
 enable-tls-between-components.md | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/enable-tls-between-components.md b/enable-tls-between-components.md
index af73a7887ddfc..3f48566c2da66 100644
--- a/enable-tls-between-components.md
+++ b/enable-tls-between-components.md
@@ -158,16 +158,17 @@ The Common Name is used for caller verification. In general, the callee needs to
 To verify component caller's identity, you need to mark the certificate user identity using `Common Name` when generating the certificate, and to check the caller's identity by configuring the `Common Name` list for the callee.
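As an illustration of marking the certificate user identity, a certificate whose `Common Name` is `TiDB` could be produced with OpenSSL roughly as follows (a sketch only; the file names and CA material are assumptions, and a deployment may use `cfssl` or another tool instead):

```bash
# Generate a private key and a CSR whose Common Name is "TiDB"
openssl genrsa -out tidb-server-key.pem 2048
openssl req -new -key tidb-server-key.pem -subj "/CN=TiDB" -out tidb-server.csr

# Sign the CSR with the cluster CA; the callee later checks this CN
# against its configured Common Name list
openssl x509 -req -in tidb-server.csr -CA ca.pem -CAkey ca-key.pem \
    -CAcreateserial -out tidb-server.pem -days 365
```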
+> **Note:**
+>
+> Currently, the `cert-allowed-cn` configuration item of PD can only be set to one value. Therefore, the `commonName` of all authentication objects must be set to the same value.
+
 - TiDB

     Configure in the configuration file or command-line arguments:

     ```toml
     [security]
-    cluster-verify-cn = [
-        "TiDB-Server",
-        "TiKV-Control",
-    ]
+    cluster-verify-cn = ["TiDB"]
     ```

 - TiKV

     Configure in the `tikv.toml` file:

     ```toml
     [security]
-    cert-allowed-cn = [
-        "TiDB-Server", "PD-Server", "TiKV-Control", "RawKvClient1",
-    ]
+    cert-allowed-cn = ["TiDB"]
     ```

 - PD

     Configure in the `pd.toml` file:

     ```toml
     [security]
-    cert-allowed-cn = ["TiKV-Server", "TiDB-Server", "PD-Control"]
+    cert-allowed-cn = ["TiDB"]
     ```

 - TiFlash (New in v4.0.5)

     Configure in the `tiflash.toml` file:

     ```toml
     [security]
-    cert_allowed_cn = ["TiKV-Server", "TiDB-Server"]
+    cert_allowed_cn = ["TiDB"]
     ```

     Configure in the `tiflash-learner.toml` file:

     ```toml
     [security]
-    cert-allowed-cn = ["PD-Server", "TiKV-Server", "TiFlash-Server"]
+    cert-allowed-cn = ["TiDB"]
     ```

 ## Reload certificates