Update system-variables.md (pingcap#16069) (pingcap#16078)
ti-chi-bot authored Jan 10, 2024
1 parent 641751d commit a93a13b
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions system-variables.md
@@ -3190,7 +3190,7 @@ For a system upgraded to v5.0 from an earlier version, if you have not modified
- Default value: `32`
- Range: `[1, 32]`
- Unit: Rows
- This variable is used to set the number of rows for the initial chunk during the execution process.
- This variable is used to set the number of rows for the initial chunk during the execution process. The number of rows for a chunk directly affects the amount of memory required for a single query. You can roughly estimate the memory needed for a single chunk by considering the total width of all columns in the query and the number of rows for the chunk. Combining this with the concurrency of the executor, you can make a rough estimation of the total memory required for a single query. It is recommended that the total memory for a single chunk does not exceed 16 MiB.
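A minimal sketch of this estimation, not taken from the docs: the 200-byte total row width and the executor concurrency of 10 below are assumed values chosen only for illustration.

```sql
-- Illustrative estimate only: assume the queried columns total about 200 bytes
-- per row. With the default initial chunk of 32 rows, one chunk needs roughly
-- 200 * 32 bytes; with an assumed executor concurrency of 10, about ten such
-- chunks may be in flight at once. Both figures stay far below 16 MiB.
SET SESSION tidb_init_chunk_size = 32;
SELECT 200 * 32 / 1024 AS chunk_kib,
       200 * 32 * 10 / 1024 AS concurrent_kib;
```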
### tidb_isolation_read_engines <span class="version-mark">New in v4.0</span>
@@ -3419,7 +3419,7 @@ For a system upgraded to v5.0 from an earlier version, if you have not modified
- Default value: `1024`
- Range: `[32, 2147483647]`
- Unit: Rows
- This variable is used to set the maximum number of rows in a chunk during the execution process. Setting it to too large a value may cause cache locality issues.
- This variable is used to set the maximum number of rows in a chunk during the execution process. Setting it to too large a value may cause cache locality issues. The recommended value for this variable is no larger than 65536. The number of rows for a chunk directly affects the amount of memory required for a single query. You can roughly estimate the memory needed for a single chunk by considering the total width of all columns in the query and the number of rows for the chunk. Combining this with the concurrency of the executor, you can make a rough estimation of the total memory required for a single query. It is recommended that the total memory for a single chunk does not exceed 16 MiB. When the query involves a large amount of data and a single chunk is insufficient to handle all the data, TiDB processes it multiple times, doubling the chunk size with each processing iteration, starting from [`tidb_init_chunk_size`](#tidb_init_chunk_size) until the chunk size reaches the value of `tidb_max_chunk_size`.
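A rough sketch of the doubling behavior described above, assuming the default settings; the row counts in the comment are illustrative, not output from TiDB.

```sql
-- With tidb_init_chunk_size = 32 and tidb_max_chunk_size = 1024, a query that
-- spans many chunks is processed with chunk sizes of roughly
-- 32, 64, 128, 256, 512, 1024, 1024, ... rows per iteration.
SET SESSION tidb_max_chunk_size = 1024;
SELECT @@SESSION.tidb_init_chunk_size AS init_rows,
       @@SESSION.tidb_max_chunk_size  AS max_rows;
```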
### tidb_max_delta_schema_count <span class="version-mark">New in v2.1.18 and v3.0.5</span>
2 changes: 1 addition & 1 deletion transaction-isolation-levels.md
@@ -76,7 +76,7 @@ Starting from v6.0.0, TiDB supports using the [`tidb_rc_read_check_ts`](/system-
- If TiDB does not encounter any data update during the read process, it returns the result to the client and the `SELECT` statement is successfully executed.
- If TiDB encounters a data update during the read process:
    - If TiDB has not yet sent the result to the client, TiDB tries to acquire a new timestamp and retry this statement.
    - If TiDB has already sent partial data to the client, TiDB reports an error to the client. The amount of data sent to the client each time is controlled by `tidb_init_chunk_size` and `tidb_max_chunk_size`.
    - If TiDB has already sent partial data to the client, TiDB reports an error to the client. The amount of data sent to the client each time is controlled by [`tidb_init_chunk_size`](/system-variables.md#tidb_init_chunk_size) and [`tidb_max_chunk_size`](/system-variables.md#tidb_max_chunk_size).

In scenarios where the `READ-COMMITTED` isolation level is used, `SELECT` statements are numerous, and read-write conflicts are rare, enabling this variable can avoid the latency and cost of getting the global timestamp.
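A hedged example of enabling this behavior follows; it assumes TiDB v6.0.0 or later and that the variable is set at the GLOBAL scope (check the variable's documented scope for your version before applying it).

```sql
-- Assumes TiDB v6.0.0 or later. Enable the read-consistency timestamp check,
-- then use the READ-COMMITTED isolation level in the current session.
SET GLOBAL tidb_rc_read_check_ts = ON;
SET SESSION transaction_isolation = 'READ-COMMITTED';
```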
