Merge remote-tracking branch 'upstream/master'
qiancai committed Apr 30, 2024
2 parents ce8650d + 791b808 commit fd3cbea
Showing 116 changed files with 185 additions and 111 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -44,7 +44,7 @@ Please check out these templates before you submit a pull request:
We use separate branches to maintain different versions of TiDB documentation.

- The [documentation under development](https://docs.pingcap.com/tidb/dev) is maintained in the `master` branch.
-- The [published documentation](https://docs.pingcap.com/tidb/stable/) is maintained in the corresponding `release-<verion>` branch. For example, TiDB v7.5 documentation is maintained in the `release-7.5` branch.
+- The [published documentation](https://docs.pingcap.com/tidb/stable/) is maintained in the corresponding `release-<version>` branch. For example, TiDB v7.5 documentation is maintained in the `release-7.5` branch.
- The [archived documentation](https://docs-archive.pingcap.com/) is no longer maintained and does not receive any further updates.

### Use cherry-pick labels
3 changes: 3 additions & 0 deletions TOC.md
@@ -526,10 +526,13 @@
- [Target Database Requirements](/tidb-lightning/tidb-lightning-requirements.md)
- Data Sources
- [Data Match Rules](/tidb-lightning/tidb-lightning-data-source.md)
+- [Rename databases and tables](/tidb-lightning/tidb-lightning-data-source.md#rename-databases-and-tables)
- [CSV](/tidb-lightning/tidb-lightning-data-source.md#csv)
- [SQL](/tidb-lightning/tidb-lightning-data-source.md#sql)
- [Parquet](/tidb-lightning/tidb-lightning-data-source.md#parquet)
+- [Compressed files](/tidb-lightning/tidb-lightning-data-source.md#compressed-files)
- [Customized File](/tidb-lightning/tidb-lightning-data-source.md#match-customized-files)
+- [Import data from Amazon S3](/tidb-lightning/tidb-lightning-data-source.md#import-data-from-amazon-s3)
- Physical Import Mode
- [Requirements and Limitations](/tidb-lightning/tidb-lightning-physical-import-mode.md)
- [Use Physical Import Mode](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md)
2 changes: 1 addition & 1 deletion benchmark/benchmark-tidb-using-sysbench.md
@@ -20,7 +20,7 @@ server_configs:
log.level: "error"
```
-It is also recommended to make sure [`tidb_enable_prepared_plan_cache`](/system-variables.md#tidb_enable_prepared_plan_cache-new-in-v610) is enabled and that you allow sysbench to use prepared statements by using `--db-ps-mode=auto`. See the [SQL Prepared Execution Plan Cache](/sql-prepared-plan-cache.md) for documetnation about what the SQL plan cache does and how to monitor it.
+It is also recommended to make sure [`tidb_enable_prepared_plan_cache`](/system-variables.md#tidb_enable_prepared_plan_cache-new-in-v610) is enabled and that you allow sysbench to use prepared statements by using `--db-ps-mode=auto`. See the [SQL Prepared Execution Plan Cache](/sql-prepared-plan-cache.md) for documentation about what the SQL plan cache does and how to monitor it.

> **Note:**
>
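As context for the hunk above, a minimal sketch of checking and enabling the plan cache (the variable name is as in the docs; defaults may vary by TiDB version):

```sql
-- Check whether the prepared plan cache is enabled (ON by default since v6.1.0)
SHOW VARIABLES LIKE 'tidb_enable_prepared_plan_cache';

-- Enable it for new sessions if it is off
SET GLOBAL tidb_enable_prepared_plan_cache = ON;
```

sysbench would then be started with `--db-ps-mode=auto` so that it sends prepared statements and exercises the plan cache.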
2 changes: 1 addition & 1 deletion best-practices-on-public-cloud.md
@@ -180,7 +180,7 @@ To reduce the number of Regions and alleviate the heartbeat overhead on the syst

## After tuning

-After the tunning, the following effects can be observed:
+After the tuning, the following effects can be observed:

- The TSO requests per second are decreased to 64,800.
- The CPU utilization is significantly reduced from approximately 4,600% to 1,400%.
2 changes: 1 addition & 1 deletion check-before-deployment.md
@@ -269,7 +269,7 @@ To check whether the NTP service is installed and whether it synchronizes with t
Unable to talk to NTP daemon. Is it running?
```

-3. Run the `chronyc tracking` command to check wheter the Chrony service synchronizes with the NTP server.
+3. Run the `chronyc tracking` command to check whether the Chrony service synchronizes with the NTP server.

> **Note:**
>
2 changes: 1 addition & 1 deletion configure-memory-usage.md
@@ -57,7 +57,7 @@ Currently, the memory limit set by `tidb_server_memory_limit` **DOES NOT** termi
>
> + During the startup process, TiDB does not guarantee that the [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) limit is enforced. If the free memory of the operating system is insufficient, TiDB might still encounter OOM. You need to ensure that the TiDB instance has enough available memory.
> + In the process of memory control, the total memory usage of TiDB might slightly exceed the limit set by `tidb_server_memory_limit`.
-> + Since v6.5.0, the configruation item `server-memory-quota` is deprecated. To ensure compatibility, after you upgrade your cluster to v6.5.0 or a later version, `tidb_server_memory_limit` will inherit the value of `server-memory-quota`. If you have not configured `server-memory-quota` before the upgrade, the default value of `tidb_server_memory_limit` is used, which is `80%`.
+> + Since v6.5.0, the configuration item `server-memory-quota` is deprecated. To ensure compatibility, after you upgrade your cluster to v6.5.0 or a later version, `tidb_server_memory_limit` will inherit the value of `server-memory-quota`. If you have not configured `server-memory-quota` before the upgrade, the default value of `tidb_server_memory_limit` is used, which is `80%`.
When the memory usage of a tidb-server instance reaches a certain proportion of the total memory (the proportion is controlled by the system variable [`tidb_server_memory_limit_gc_trigger`](/system-variables.md#tidb_server_memory_limit_gc_trigger-new-in-v640)), tidb-server will try to trigger a Golang GC to relieve memory stress. To avoid frequent GCs that cause performance issues due to the instance memory fluctuating around the threshold, this GC method will trigger GC at most once every minute.

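A sketch of how the variables discussed in this hunk fit together (the values are illustrative, not recommendations):

```sql
-- Cap the memory usage of this tidb-server instance
SET GLOBAL tidb_server_memory_limit = "80%";

-- Trigger a Golang GC when usage reaches 70% of that limit
-- (throttled to at most once per minute, as described above)
SET GLOBAL tidb_server_memory_limit_gc_trigger = 0.7;
```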
2 changes: 1 addition & 1 deletion dashboard/dashboard-session-sso.md
@@ -104,7 +104,7 @@ First, create an Okta Application Integration to integrate SSO.

![Sample Step](/media/dashboard/dashboard-session-sso-okta-1.png)

-4. In the poped up dialog, choose **OIDC - OpenID Connect** in **Sign-in method**.
+4. In the popped up dialog, choose **OIDC - OpenID Connect** in **Sign-in method**.

5. Choose **Single-Page Application** in **Application Type**.

2 changes: 1 addition & 1 deletion ddl-introduction.md
@@ -77,7 +77,7 @@ absent -> delete only -> write only -> write reorg -> public
For users, the newly created index is unavailable before the `public` state.

<SimpleTab>
-<div label="Online DDL asychronous change before TiDB v6.2.0">
+<div label="Online DDL asynchronous change before TiDB v6.2.0">

Before v6.2.0, the process of handling asynchronous schema changes in the TiDB SQL layer is as follows:

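The state transitions quoted above (`absent -> delete only -> write only -> write reorg -> public`) apply to online DDL such as index creation; a hypothetical example (table and index names are made up):

```sql
-- The new index passes through each intermediate state in the background;
-- queries can use idx_name only after it reaches the public state.
ALTER TABLE t ADD INDEX idx_name (name);
```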
2 changes: 1 addition & 1 deletion develop/dev-guide-use-common-table-expression.md
@@ -15,7 +15,7 @@ Since TiDB v5.1, TiDB supports the CTE of the ANSI SQL99 standard and recursion.

## Basic use

-A Common Table Expression (CTE) is a temporary result set that can be referred to multiple times within a SQL statement to improve the statement readability and execution efficiency. You can apply the `WITH` statement to use CTE.
+A Common Table Expression (CTE) is a temporary result set that can be referred to multiple times within a SQL statement to improve the statement readability and execution efficiency. You can apply the [`WITH`](/sql-statements/sql-statement-with.md) statement to use CTE.

Common Table Expressions can be classified into two types: non-recursive CTE and recursive CTE.

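A short sketch of both CTE types mentioned above (table and column names are hypothetical):

```sql
-- Non-recursive CTE: a named temporary result set
WITH authors_under_100 AS (
  SELECT id, name FROM authors WHERE id < 100
)
SELECT * FROM authors_under_100;

-- Recursive CTE: generates the sequence 1..5
WITH RECURSIVE seq (n) AS (
  SELECT 1
  UNION ALL
  SELECT n + 1 FROM seq WHERE n < 5
)
SELECT n FROM seq;
```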
2 changes: 1 addition & 1 deletion dm/dm-enable-tls.md
@@ -109,7 +109,7 @@ This section introduces how to enable encrypted data transmission between DM com

### Enable encrypted data transmission for downstream TiDB

-1. Configure the downstream TiDB to use encrypted connections. For detailed operatons, refer to [Configure TiDB server to use secure connections](/enable-tls-between-clients-and-servers.md#configure-tidb-server-to-use-secure-connections).
+1. Configure the downstream TiDB to use encrypted connections. For detailed operations, refer to [Configure TiDB server to use secure connections](/enable-tls-between-clients-and-servers.md#configure-tidb-server-to-use-secure-connections).

2. Set the TiDB client certificate in the task configuration file:

2 changes: 1 addition & 1 deletion dm/dm-faq.md
@@ -365,7 +365,7 @@ To solve this issue, you are recommended to maintain DM clusters using TiUP. In

## Why DM-master cannot be connected when I use dmctl to execute commands?

-When using dmctl execute commands, you might find the connection to DM master fails (even if you have specified the parameter value of `--master-addr` in the command), and the error message is like `RawCause: context deadline exceeded, Workaround: please check your network connection.`. But afer checking the network connection using commands like `telnet <master-addr>`, no exception is found.
+When using dmctl execute commands, you might find the connection to DM master fails (even if you have specified the parameter value of `--master-addr` in the command), and the error message is like `RawCause: context deadline exceeded, Workaround: please check your network connection.`. But after checking the network connection using commands like `telnet <master-addr>`, no exception is found.

In this case, you can check the environment variable `https_proxy` (note that it is **https**). If this variable is configured, dmctl automatically connects the host and port specified by `https_proxy`. If the host does not have a corresponding `proxy` forwarding service, the connection fails.

2 changes: 1 addition & 1 deletion dm/dm-open-api.md
@@ -1346,7 +1346,7 @@ curl -X 'GET' \
"name": "string",
"source_name": "string",
"worker_name": "string",
-"stage": "runing",
+"stage": "running",
"unit": "sync",
"unresolved_ddl_lock_id": "string",
"load_status": {
2 changes: 1 addition & 1 deletion dm/dm-table-routing.md
@@ -86,7 +86,7 @@ To migrate the upstream instances to the downstream `test`.`t`, you must create

Assuming in the scenario of sharded schemas and tables, you want to migrate the `test_{1,2,3...}`.`t_{1,2,3...}` tables in two upstream MySQL instances to the `test`.`t` table in the downstream TiDB instance. At the same time, you want to extract the source information of the sharded tables and write it to the downstream merged table.

-To migrate the upstream instances to the downstream `test`.`t`, you must create routing rules similar to the previous section [Merge sharded schemas and tables](#merge-sharded-schemas-and-tables). In addtion, you need to add the `extract-table`, `extract-schema`, and `extract-source` configurations:
+To migrate the upstream instances to the downstream `test`.`t`, you must create routing rules similar to the previous section [Merge sharded schemas and tables](#merge-sharded-schemas-and-tables). In addition, you need to add the `extract-table`, `extract-schema`, and `extract-source` configurations:

- `extract-table`: For a sharded table matching `schema-pattern` and `table-pattern`, DM extracts the sharded table name by using `table-regexp` and writes the name suffix without the `t_` part to `target-column` of the merged table, that is, the `c_table` column.
- `extract-schema`: For a sharded schema matching `schema-pattern` and `table-pattern`, DM extracts the sharded schema name by using `schema-regexp` and writes the name suffix without the `test_` part to `target-column` of the merged table, that is, the `c_schema` column.
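A hypothetical routing rule illustrating the three `extract-*` configurations described above (key names follow the hunk; the exact file layout is an assumption):

```yaml
routes:
  rule-1:
    schema-pattern: "test_*"
    table-pattern: "t_*"
    target-schema: "test"
    target-table: "t"
    extract-table:
      table-regexp: "t_(.*)"      # suffix after "t_" is written to c_table
      target-column: "c_table"
    extract-schema:
      schema-regexp: "test_(.*)"  # suffix after "test_" is written to c_schema
      target-column: "c_schema"
    extract-source:
      source-regexp: "(.*)"       # source instance name is written to c_source
      target-column: "c_source"
```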
2 changes: 1 addition & 1 deletion dm/monitor-a-dm-cluster.md
@@ -94,7 +94,7 @@ The following metrics show only when `task-mode` is in the `incremental` or `all
| total sqls jobs | The number of newly added jobs per unit of time | N/A | N/A |
| finished sqls jobs | The number of finished jobs per unit of time | N/A | N/A |
| statement execution latency | The duration that the binlog replication unit executes the statement to the downstream (in seconds) | N/A | N/A |
-| add job duration | The duration tht the binlog replication unit adds a job to the queue (in seconds) | N/A | N/A |
+| add job duration | The duration that the binlog replication unit adds a job to the queue (in seconds) | N/A | N/A |
| DML conflict detect duration | The duration that the binlog replication unit detects the conflict in DML (in seconds) | N/A | N/A |
| skipped event duration | The duration that the binlog replication unit skips a binlog event (in seconds) | N/A | N/A |
| unsynced tables | The number of tables that have not received the shard DDL statement in the current subtask | N/A | N/A |
2 changes: 1 addition & 1 deletion dm/quick-start-create-source.md
@@ -84,7 +84,7 @@ The returned results are as follows:
After creating a data source, you can use the following command to query the data source:
-- If you konw the `source-id` of the data source, you can use the `dmctl config source <source-id>` command to directly check the configuration of the data source:
+- If you know the `source-id` of the data source, you can use the `dmctl config source <source-id>` command to directly check the configuration of the data source:
{{< copyable "shell-regular" >}}
19 changes: 9 additions & 10 deletions enable-tls-between-components.md
@@ -158,16 +158,17 @@ The Common Name is used for caller verification. In general, the callee needs to

To verify component caller's identity, you need to mark the certificate user identity using `Common Name` when generating the certificate, and to check the caller's identity by configuring the `Common Name` list for the callee.

+> **Note:**
+>
+> Currently the `cert-allowed-cn` configuration item of the PD can only be set to one value. Therefore, the `commonName` of all authentication objects must be set to the same value.

- TiDB

Configure in the configuration file or command-line arguments:

```toml
[security]
-cluster-verify-cn = [
-    "TiDB-Server",
-    "TiKV-Control",
-]
+cluster-verify-cn = ["TiDB"]
```

- TiKV
@@ -176,9 +177,7 @@

```toml
[security]
-cert-allowed-cn = [
-    "TiDB-Server", "PD-Server", "TiKV-Control", "RawKvClient1",
-]
+cert-allowed-cn = ["TiDB"]
```

- PD
@@ -187,7 +186,7 @@ To verify component caller's identity, you need to mark the certificate user ide

```toml
[security]
-cert-allowed-cn = ["TiKV-Server", "TiDB-Server", "PD-Control"]
+cert-allowed-cn = ["TiDB"]
```

- TiFlash (New in v4.0.5)
@@ -196,14 +195,14 @@ To verify component caller's identity, you need to mark the certificate user ide

```toml
[security]
-cert_allowed_cn = ["TiKV-Server", "TiDB-Server"]
+cert_allowed_cn = ["TiDB"]
```

Configure in the `tiflash-learner.toml` file:

```toml
[security]
-cert-allowed-cn = ["PD-Server", "TiKV-Server", "TiFlash-Server"]
+cert-allowed-cn = ["TiDB"]
```

## Reload certificates
2 changes: 1 addition & 1 deletion encryption-at-rest.md
@@ -287,7 +287,7 @@ The encryption algorithm currently supported by TiFlash is consistent with that

The same master key can be shared by multiple instances of TiFlash, and can also be shared among TiFlash and TiKV. The recommended way to provide a master key in production is via AWS KMS. Alternatively, if using custom key is desired, supplying the master key via file is also supported. The specific method to generate master key and the format of the master key are the same as TiKV.

-TiFlash uses the current data key to encrypt all data placed on the disk, including data files, Schmea files, and temporary data files generated during calculations. Data keys are automatically rotated by TiFlash every week by default, and the period is configurable. On key rotation, TiFlash does not rewrite all existing files to replace the key, but background compaction tasks are expected to rewrite old data into new data files, with the most recent data key, if the cluster gets constant write workload. TiFlash keeps track of the key and encryption method used to encrypt each of the files and use the information to decrypt the content on reads.
+TiFlash uses the current data key to encrypt all data placed on the disk, including data files, Schema files, and temporary data files generated during calculations. Data keys are automatically rotated by TiFlash every week by default, and the period is configurable. On key rotation, TiFlash does not rewrite all existing files to replace the key, but background compaction tasks are expected to rewrite old data into new data files, with the most recent data key, if the cluster gets constant write workload. TiFlash keeps track of the key and encryption method used to encrypt each of the files and use the information to decrypt the content on reads.

### Key creation

2 changes: 1 addition & 1 deletion explain-index-merge.md
@@ -94,6 +94,6 @@ When using the intersection-type index merge to access tables, the optimizer can
>
> - If the optimizer can choose the single index scan method (other than full table scan) for a query plan, the optimizer will not automatically use index merge. For the optimizer to use index merge, you need to use the optimizer hint.
>
-> - Index Merge is not supported in [tempoaray tables](/temporary-tables.md) for now.
+> - Index Merge is not supported in [temporary tables](/temporary-tables.md) for now.
>
> - The intersection-type index merge will not automatically be selected by the optimizer. You must specify the **table name and index name** using the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint for it to be selected.
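An illustration of the hint mentioned in the last bullet (the table and index names are hypothetical):

```sql
-- Force intersection-type index merge over idx_a and idx_b on table t1
SELECT /*+ USE_INDEX_MERGE(t1, idx_a, idx_b) */ *
FROM t1
WHERE a = 1 AND b = 2;
```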
2 changes: 1 addition & 1 deletion faq/manage-cluster-faq.md
@@ -73,7 +73,7 @@ TiDB provides a few features and [tools](/ecosystem-tool-user-guide.md), with wh

The TiDB community is highly active. The engineers have been keeping optimizing features and fixing bugs. Therefore, the TiDB version is updated quite fast. If you want to keep informed of the latest version, see [TiDB Release Timeline](/releases/release-timeline.md).

-It is recommeneded to deploy TiDB [using TiUP](/production-deployment-using-tiup.md) or [using TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/stable). TiDB has a unified management of the version number. You can view the version number using one of the following methods:
+It is recommended to deploy TiDB [using TiUP](/production-deployment-using-tiup.md) or [using TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/stable). TiDB has a unified management of the version number. You can view the version number using one of the following methods:

- `select tidb_version()`
- `tidb-server -V`
2 changes: 1 addition & 1 deletion faq/migration-tidb-faq.md
@@ -93,7 +93,7 @@ To migrate all the data or migrate incrementally from DB2 or Oracle to TiDB, see

Currently, it is recommended to use OGG.

-### Error: `java.sql.BatchUpdateExecption:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in `batches`
+### Error: `java.sql.BatchUpdateException:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in `batches`

In Sqoop, `--batch` means committing 100 `statement`s in each batch, but by default each `statement` contains 100 SQL statements. So, 100 * 100 = 10000 SQL statements, which exceeds 5000, the maximum number of statements allowed in a single TiDB transaction.

2 changes: 1 addition & 1 deletion faq/sql-faq.md
@@ -151,7 +151,7 @@ TiDB supports modifying the [`sql_mode`](/system-variables.md#sql_mode) system v
- Changes to [`GLOBAL`](/sql-statements/sql-statement-set-variable.md) scoped variables propagate to the rest servers of the cluster and persist across restarts. This means that you do not need to change the `sql_mode` value on each TiDB server.
- Changes to `SESSION` scoped variables only affect the current client session. After restarting a server, the changes are lost.

-## Error: `java.sql.BatchUpdateExecption:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in batches
+## Error: `java.sql.BatchUpdateException:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in batches

In Sqoop, `--batch` means committing 100 statements in each batch, but by default each statement contains 100 SQL statements. So, 100 * 100 = 10000 SQL statements, which exceeds 5000, the maximum number of statements allowed in a single TiDB transaction.

2 changes: 1 addition & 1 deletion functions-and-operators/precision-math.md
@@ -51,7 +51,7 @@ DECIMAL columns do not store a leading `+` character or `-` character or leading

DECIMAL columns do not permit values larger than the range implied by the column definition. For example, a `DECIMAL(3,0)` column supports a range of `-999` to `999`. A `DECIMAL(M,D)` column permits at most `M - D` digits to the left of the decimal point.

-For more information about the internal format of the DECIMAL values, see [`mydecimal.go`](https://github.com/pingcap/tidb/blob/master/pkg/types/mydecimal.go) in TiDB souce code.
+For more information about the internal format of the DECIMAL values, see [`mydecimal.go`](https://github.com/pingcap/tidb/blob/master/pkg/types/mydecimal.go) in TiDB source code.

## Expression handling

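A quick illustration of the `DECIMAL(3,0)` range stated in the context above:

```sql
CREATE TABLE d (v DECIMAL(3,0));
INSERT INTO d VALUES (999);   -- within -999..999, accepted
INSERT INTO d VALUES (1000);  -- exceeds DECIMAL(3,0); rejected or truncated, depending on sql_mode
```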