diff --git a/faq/faq-overview.md b/faq/faq-overview.md
index b1f33bf1ad5c3..242079f70da94 100644
--- a/faq/faq-overview.md
+++ b/faq/faq-overview.md
@@ -7,15 +7,69 @@ summary: Summarizes frequently asked questions (FAQs) about TiDB.

 This document summarizes frequently asked questions (FAQs) about TiDB.

-| Category | Related documents |
-| :------- | :------------------- |
-| TiDB architecture and principles | [TiDB Architecture FAQs](/faq/tidb-faq.md) |
-| Deployment | |
-| Data migration | |
-| Data backup and restore | [Backup & Restore FAQs](/faq/backup-and-restore-faq.md) |
-| SQL operations | [SQL FAQs](/faq/sql-faq.md) |
-| Cluster upgrade | [TiDB Upgrade FAQs](/faq/upgrade-faq.md) |
-| Cluster management | [Cluster Management FAQs](/faq/manage-cluster-faq.md) |
-| Monitor and alert | |
-| High availability and high reliability | |
-| Common error codes | [Error Codes and Troubleshooting](/error-codes.md) |
+<table>
+<thead>
+<tr>
+<th>Category</th>
+<th>Related documents</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>TiDB architecture and principles</td>
+<td><a href="/faq/tidb-faq.md">TiDB Architecture FAQs</a></td>
+</tr>
+<tr>
+<td>Deployment</td>
+<td></td>
+</tr>
+<tr>
+<td>Data migration</td>
+<td></td>
+</tr>
+<tr>
+<td>Data backup and restore</td>
+<td><a href="/faq/backup-and-restore-faq.md">Backup & Restore FAQs</a></td>
+</tr>
+<tr>
+<td>SQL operations</td>
+<td><a href="/faq/sql-faq.md">SQL FAQs</a></td>
+</tr>
+<tr>
+<td>Cluster upgrade</td>
+<td><a href="/faq/upgrade-faq.md">TiDB Upgrade FAQs</a></td>
+</tr>
+<tr>
+<td>Cluster management</td>
+<td><a href="/faq/manage-cluster-faq.md">Cluster Management FAQs</a></td>
+</tr>
+<tr>
+<td>Monitor and alert</td>
+<td></td>
+</tr>
+<tr>
+<td>High availability and high reliability</td>
+<td></td>
+</tr>
+<tr>
+<td>Common error codes</td>
+<td><a href="/error-codes.md">Error Codes and Troubleshooting</a></td>
+</tr>
+</tbody>
+</table>
\ No newline at end of file
diff --git a/hardware-and-software-requirements.md b/hardware-and-software-requirements.md
index 15d824567af30..4382a6541484b 100644
--- a/hardware-and-software-requirements.md
+++ b/hardware-and-software-requirements.md
@@ -18,22 +18,72 @@ As an open-source distributed SQL database with high performance, TiDB can be de

 ## OS and platform requirements

-| Operating systems | Supported CPU architectures |
-| :--- | :--- |
-| Red Hat Enterprise Linux 8.4 or a later 8.x version | |
-| | |
-| Amazon Linux 2 | |
-| Kylin Euler V10 SP1/SP2 | |
-| UOS V20 | |
-| openEuler 22.03 LTS SP1 | x86_64 |
-| macOS 12 (Monterey) or later | |
-| Oracle Enterprise Linux 7.3 or a later 7.x version | x86_64 |
-| Ubuntu LTS 18.04 or later | x86_64 |
-| CentOS 8 Stream | |
-| Debian 9 (Stretch) or later | x86_64 |
-| Fedora 35 or later | x86_64 |
-| openSUSE Leap later than v15.3 (not including Tumbleweed) | x86_64 |
-| SUSE Linux Enterprise Server 15 | x86_64 |
+<table>
+<thead>
+<tr>
+<th>Operating systems</th>
+<th>Supported CPU architectures</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>Red Hat Enterprise Linux 8.4 or a later 8.x version</td>
+<td><ul><li>x86_64</li><li>ARM 64</li></ul></td>
+</tr>
+<tr>
+<td><ul><li>Red Hat Enterprise Linux 7.3 or a later 7.x version</li><li>CentOS 7.3 or a later 7.x version</li></ul></td>
+<td><ul><li>x86_64</li><li>ARM 64</li></ul></td>
+</tr>
+<tr>
+<td>Amazon Linux 2</td>
+<td><ul><li>x86_64</li><li>ARM 64</li></ul></td>
+</tr>
+<tr>
+<td>Kylin Euler V10 SP1/SP2</td>
+<td><ul><li>x86_64</li><li>ARM 64</li></ul></td>
+</tr>
+<tr>
+<td>UOS V20</td>
+<td><ul><li>x86_64</li><li>ARM 64</li></ul></td>
+</tr>
+<tr>
+<td>openEuler 22.03 LTS SP1</td>
+<td>x86_64</td>
+</tr>
+<tr>
+<td>macOS 12 (Monterey) or later</td>
+<td><ul><li>x86_64</li><li>ARM 64</li></ul></td>
+</tr>
+<tr>
+<td>Oracle Enterprise Linux 7.3 or a later 7.x version</td>
+<td>x86_64</td>
+</tr>
+<tr>
+<td>Ubuntu LTS 18.04 or later</td>
+<td>x86_64</td>
+</tr>
+<tr>
+<td>CentOS 8 Stream</td>
+<td><ul><li>x86_64</li><li>ARM 64</li></ul></td>
+</tr>
+<tr>
+<td>Debian 9 (Stretch) or later</td>
+<td>x86_64</td>
+</tr>
+<tr>
+<td>Fedora 35 or later</td>
+<td>x86_64</td>
+</tr>
+<tr>
+<td>openSUSE Leap later than v15.3 (not including Tumbleweed)</td>
+<td>x86_64</td>
+</tr>
+<tr>
+<td>SUSE Linux Enterprise Server 15</td>
+<td>x86_64</td>
+</tr>
+</tbody>
+</table>
 > **Note:**
 >
@@ -169,14 +219,47 @@ As an open-source distributed SQL database, TiDB requires the following network

 ## Disk space requirements

-| Component | Disk space requirement | Healthy disk usage |
-| :-- | :-- | :-- |
-| TiDB | | Lower than 90% |
-| PD | At least 20 GB for the data disk and for the log disk, respectively | Lower than 90% |
-| TiKV | At least 100 GB for the data disk and for the log disk, respectively | Lower than 80% |
-| TiFlash | At least 100 GB for the data disk and at least 30 GB for the log disk, respectively | Lower than 80% |
-| TiUP | | N/A |
-| Ngmonitoring | | N/A |
+<table>
+<thead>
+<tr>
+<th>Component</th>
+<th>Disk space requirement</th>
+<th>Healthy disk usage</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>TiDB</td>
+<td>
+<ul>
+<li>At least 30 GB for the log disk</li>
+<li>Starting from v6.5.0, Fast Online DDL (controlled by the <code>tidb_ddl_enable_fast_reorg</code> variable) is enabled by default to accelerate DDL operations, such as adding indexes. If DDL operations involving large objects exist in your application, it is highly recommended to prepare additional SSD disk space for TiDB (100 GB or more). For detailed configuration instructions, see Set a temporary space for a TiDB instance.</li>
+</ul>
+</td>
+<td>Lower than 90%</td>
+</tr>
+<tr>
+<td>PD</td>
+<td>At least 20 GB for the data disk and for the log disk, respectively</td>
+<td>Lower than 90%</td>
+</tr>
+<tr>
+<td>TiKV</td>
+<td>At least 100 GB for the data disk and for the log disk, respectively</td>
+<td>Lower than 80%</td>
+</tr>
+<tr>
+<td>TiFlash</td>
+<td>At least 100 GB for the data disk and at least 30 GB for the log disk, respectively</td>
+<td>Lower than 80%</td>
+</tr>
+<tr>
+<td>TiUP</td>
+<td>
+<ul>
+<li>Control machine: No more than 1 GB space is required for deploying a TiDB cluster of a single version. The space required increases if TiDB clusters of multiple versions are deployed.</li>
+<li>Deployment servers (machines where the TiDB components run): TiFlash occupies about 700 MB space and other components (such as PD, TiDB, and TiKV) occupy about 200 MB space respectively. During the cluster deployment process, the TiUP cluster requires less than 1 MB of temporary space (<code>/tmp</code> directory) to store temporary files.</li>
+</ul>
+</td>
+<td>N/A</td>
+</tr>
+<tr>
+<td>Ngmonitoring</td>
+<td>
+<ul>
+<li>Conprof: 3 x 1 GB x Number of components (each component occupies about 1 GB per day, 3 days in total) + 20 GB reserved space</li>
+<li>Top SQL: 30 x 50 MB x Number of components (each component occupies about 50 MB per day, 30 days in total)</li>
+<li>Conprof and Top SQL share the reserved space</li>
+</ul>
+</td>
+<td>N/A</td>
+</tr>
+</tbody>
+</table>
 ## Web browser requirements
diff --git a/migration-tools.md b/migration-tools.md
index e1ee7d14cbdf0..69ee82bd61bb3 100644
--- a/migration-tools.md
+++ b/migration-tools.md
@@ -15,57 +15,75 @@ This document introduces the user scenarios, supported upstreams and downstreams

 ## [TiDB Data Migration (DM)](/dm/dm-overview.md)

-| User scenario |Data migration from MySQL-compatible databases to TiDB|
-|---|---|
-| **Upstream** | MySQL, MariaDB, Aurora |
-| **Downstream** | TiDB |
-| **Advantages** | |
-| **Limitation** | Data import speed is roughly the same as that of TiDB Lightning's [logical import mode](/tidb-lightning/tidb-lightning-logical-import-mode.md), and a lot lower than that of TiDB Lightning's [physical import mode](/tidb-lightning/tidb-lightning-physical-import-mode.md). So it is recommended to use DM to migrate full data with a size of less than 1 TiB. |
+- **User scenario**: Data migration from MySQL-compatible databases to TiDB
+- **Upstream**: MySQL, MariaDB, Aurora
+- **Downstream**: TiDB
+- **Advantages**:
+    - A convenient and unified data migration task management tool that supports full data migration and incremental replication
+    - Support filtering tables and operations
+    - Support shard merge and migration
+- **Limitation**: Data import speed is roughly the same as that of TiDB Lightning's [logical import mode](/tidb-lightning/tidb-lightning-logical-import-mode.md), and much lower than that of TiDB Lightning's [physical import mode](/tidb-lightning/tidb-lightning-physical-import-mode.md). So it is recommended to use DM to migrate full data with a size of less than 1 TiB.

 ## [TiDB Lightning](/tidb-lightning/tidb-lightning-overview.md)

-| User scenario | Full data import into TiDB |
-|---|---|
-| **Upstream (the imported source file)** | |
-| **Downstream** | TiDB |
-| **Advantages** | |
-| **Limitation** | |
+- **User scenario**: Full data import into TiDB
+- **Upstream (the imported source file)**:
+    - Files exported from Dumpling
+    - Parquet files exported by Amazon Aurora or Apache Hive
+    - CSV files
+    - Data from local disks or Amazon S3
+- **Downstream**: TiDB
+- **Advantages**:
+    - Support quickly importing a large amount of data and quickly initializing a specific table in a TiDB cluster
+    - Support checkpoints to store the import progress, so that `tidb-lightning` continues importing from where it left off after restarting
+    - Support data filtering
+- **Limitation**:
+    - If [physical import mode](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md) is used for data import, the TiDB cluster cannot provide services during the import process.
+    - If you do not want the TiDB services to be impacted, perform the data import according to TiDB Lightning [logical import mode](/tidb-lightning/tidb-lightning-logical-import-mode-usage.md).

 ## [Dumpling](/dumpling-overview.md)

-| User scenario | Full data export from MySQL or TiDB |
-|---|---|
-| **Upstream** | MySQL, TiDB |
-| **Downstream (the output file)** | SQL, CSV |
-| **Advantages** | |
-| **Limitation** | |
+- **User scenario**: Full data export from MySQL or TiDB
+- **Upstream**: MySQL, TiDB
+- **Downstream (the output file)**: SQL, CSV
+- **Advantages**:
+    - Support the table-filter feature that enables you to filter data more easily
+    - Support exporting data to Amazon S3
+- **Limitation**:
+    - If you want to restore the exported data to a database other than TiDB, it is recommended to use Dumpling.
+    - If you want to restore the exported data to another TiDB cluster, it is recommended to use Backup & Restore (BR).

 ## [TiCDC](/ticdc/ticdc-overview.md)

-| User scenario | This tool is implemented by pulling TiKV change logs. It can restore cluster data to a consistent state with any upstream TSO, and support other systems to subscribe to data changes. |
-|---|---|
-| **Upstream** | TiDB |
-| **Downstream** | TiDB, MySQL, Kafka, Confluent |
-| **Advantages** | Provide TiCDC Open Protocol |
-| **Limitation** | TiCDC only replicates tables that have at least one valid index. The following scenarios are not supported: |
+- **User scenario**: This tool is implemented by pulling TiKV change logs. It can restore cluster data to a consistent state with any upstream TSO, and supports other systems subscribing to data changes.
+- **Upstream**: TiDB
+- **Downstream**: TiDB, MySQL, Kafka, Confluent
+- **Advantages**: Provide TiCDC Open Protocol
+- **Limitation**: TiCDC only replicates tables that have at least one valid index. The following scenarios are not supported:
+    - The TiKV cluster that uses RawKV alone.
+    - The DDL operation `CREATE SEQUENCE` and the `SEQUENCE` function in TiDB.

 ## [Backup & Restore (BR)](/br/backup-and-restore-overview.md)

-| User scenario | Migrate a large amount of TiDB cluster data by backing up and restoring data |
-|---|---|
-| **Upstream** | TiDB |
-| **Downstream (the output file)** | SST, backup.meta files, backup.lock files |
-| **Advantages** | |
-| **Limitation** | |
+- **User scenario**: Migrate a large amount of TiDB cluster data by backing up and restoring data
+- **Upstream**: TiDB
+- **Downstream (the output file)**: SST, backup.meta files, backup.lock files
+- **Advantages**:
+    - Suitable for migrating data to another TiDB cluster
+    - Support backing up data to an external storage for disaster recovery
+- **Limitation**:
+    - When BR restores data to the upstream cluster of TiCDC or Drainer, the restored data cannot be replicated to the downstream by TiCDC or Drainer.
+    - BR supports operations only between clusters that have the same `new_collations_enabled_on_first_bootstrap` value.

 ## [sync-diff-inspector](/sync-diff-inspector/sync-diff-inspector-overview.md)

-| User scenario | Comparing data stored in the databases with the MySQL protocol |
-|---|---|
-| **Upstream** | TiDB, MySQL |
-| **Downstream** | TiDB, MySQL |
-| **Advantages** | Can be used to repair data in the scenario where a small amount of data is inconsistent |
-| **Limitation** | |
+- **User scenario**: Comparing data stored in databases that use the MySQL protocol
+- **Upstream**: TiDB, MySQL
+- **Downstream**: TiDB, MySQL
+- **Advantages**: Can be used to repair data in the scenario where a small amount of data is inconsistent
+- **Limitation**:
+    - Online check is not supported for data migration between MySQL and TiDB.
+    - JSON, BIT, BINARY, BLOB, and other types of data are not supported.

 ## Install tools using TiUP
diff --git a/releases/release-5.4.0.md b/releases/release-5.4.0.md
index bf8fd2a37a474..396693e162c44 100644
--- a/releases/release-5.4.0.md
+++ b/releases/release-5.4.0.md
@@ -30,19 +30,72 @@ In v5.4, the key new features or improvements are as follows:

 ### System variables

-| Variable name | Change type | Description |
-| :---------- | :----------- | :----------- |
-| [`tidb_enable_column_tracking`](/system-variables.md#tidb_enable_column_tracking-new-in-v540) | Newly added | Controls whether to allow TiDB to collect `PREDICATE COLUMNS`. The default value is `OFF`. |
-| [`tidb_enable_paging`](/system-variables.md#tidb_enable_paging-new-in-v540) | Newly added | Controls whether to use the method of paging to send coprocessor requests in `IndexLookUp` operator. The default value is `OFF`. For read queries that use `IndexLookup` and `Limit` and that `Limit` cannot be pushed down to `IndexScan`, there might be high latency for the read queries and high CPU usage for TiKV's `unified read pool`. In such cases, because the `Limit` operator only requires a small set of data, if you set `tidb_enable_paging` to `ON`, TiDB processes less data, which reduces query latency and resource consumption. |
-| [`tidb_enable_top_sql`](/system-variables.md#tidb_enable_top_sql-new-in-v540) | Newly added | Controls whether to enable the Top SQL feature. The default value is `OFF`. |
-| [`tidb_persist_analyze_options`](/system-variables.md#tidb_persist_analyze_options-new-in-v540) | Newly added | Controls whether to enable the [ANALYZE configuration persistence](/statistics.md#persist-analyze-configurations) feature. The default value is `ON`. |
-| [`tidb_read_staleness`](/system-variables.md#tidb_read_staleness-new-in-v540) | Newly added | Controls the range of historical data that can be read in the current session. The default value is `0`.|
-| [`tidb_regard_null_as_point`](/system-variables.md#tidb_regard_null_as_point-new-in-v540) | Newly added | Controls whether the optimizer can use a query condition including null equivalence as a prefix condition for index access. |
-| [`tidb_stats_load_sync_wait`](/system-variables.md#tidb_stats_load_sync_wait-new-in-v540) | Newly added | Controls whether to enable the synchronously loading statistics feature. The default value `0` means that the feature is disabled and that the statistics is asynchronously loaded. When the feature is enabled, this variable controls the maximum time that SQL optimization can wait for synchronously loading statistics before timeout. |
-| [`tidb_stats_load_pseudo_timeout`](/system-variables.md#tidb_stats_load_pseudo_timeout-new-in-v540) | Newly added | Controls when synchronously loading statistics reaches timeout, whether SQL fails (`OFF`) or falls back to using pseudo statistics (`ON`). The default value is `OFF`. |
-| [`tidb_backoff_lock_fast`](/system-variables.md#tidb_backoff_lock_fast) | Modified | The default value is changed from `100` to `10`. |
-| [`tidb_enable_index_merge`](/system-variables.md#tidb_enable_index_merge-new-in-v40) | Modified | The default value is changed from `OFF` to `ON`. |
-| [`tidb_store_limit`](/system-variables.md#tidb_store_limit-new-in-v304-and-v40) | Modified | Before v5.4.0, this variable can be configured at instance level and globally. Starting from v5.4.0, this variable only supports global configuration. |
+<table>
+<thead>
+<tr>
+<th>Variable name</th>
+<th>Change type</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><a href="/system-variables.md#tidb_enable_column_tracking-new-in-v540"><code>tidb_enable_column_tracking</code></a></td>
+<td>Newly added</td>
+<td>Controls whether to allow TiDB to collect <code>PREDICATE COLUMNS</code>. The default value is <code>OFF</code>.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_enable_paging-new-in-v540"><code>tidb_enable_paging</code></a></td>
+<td>Newly added</td>
+<td>Controls whether to use the method of paging to send coprocessor requests in the <code>IndexLookUp</code> operator. The default value is <code>OFF</code>. For read queries that use <code>IndexLookup</code> and <code>Limit</code> and in which <code>Limit</code> cannot be pushed down to <code>IndexScan</code>, there might be high latency for the read queries and high CPU usage for TiKV's <code>unified read pool</code>. In such cases, because the <code>Limit</code> operator only requires a small set of data, if you set <code>tidb_enable_paging</code> to <code>ON</code>, TiDB processes less data, which reduces query latency and resource consumption.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_enable_top_sql-new-in-v540"><code>tidb_enable_top_sql</code></a></td>
+<td>Newly added</td>
+<td>Controls whether to enable the Top SQL feature. The default value is <code>OFF</code>.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_persist_analyze_options-new-in-v540"><code>tidb_persist_analyze_options</code></a></td>
+<td>Newly added</td>
+<td>Controls whether to enable the <a href="/statistics.md#persist-analyze-configurations">ANALYZE configuration persistence</a> feature. The default value is <code>ON</code>.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_read_staleness-new-in-v540"><code>tidb_read_staleness</code></a></td>
+<td>Newly added</td>
+<td>Controls the range of historical data that can be read in the current session. The default value is <code>0</code>.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_regard_null_as_point-new-in-v540"><code>tidb_regard_null_as_point</code></a></td>
+<td>Newly added</td>
+<td>Controls whether the optimizer can use a query condition including null equivalence as a prefix condition for index access.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_stats_load_sync_wait-new-in-v540"><code>tidb_stats_load_sync_wait</code></a></td>
+<td>Newly added</td>
+<td>Controls whether to enable the synchronously loading statistics feature. The default value <code>0</code> means that the feature is disabled and that statistics are loaded asynchronously. When the feature is enabled, this variable controls the maximum time that SQL optimization can wait for synchronously loading statistics before timeout.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_stats_load_pseudo_timeout-new-in-v540"><code>tidb_stats_load_pseudo_timeout</code></a></td>
+<td>Newly added</td>
+<td>Controls whether SQL fails (<code>OFF</code>) or falls back to using pseudo statistics (<code>ON</code>) when synchronously loading statistics reaches timeout. The default value is <code>OFF</code>.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_backoff_lock_fast"><code>tidb_backoff_lock_fast</code></a></td>
+<td>Modified</td>
+<td>The default value is changed from <code>100</code> to <code>10</code>.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_enable_index_merge-new-in-v40"><code>tidb_enable_index_merge</code></a></td>
+<td>Modified</td>
+<td>The default value is changed from <code>OFF</code> to <code>ON</code>.
+<ul>
+<li>If you upgrade a TiDB cluster from versions earlier than v4.0.0 to v5.4.0 or later, this variable is <code>OFF</code> by default.</li>
+<li>If you upgrade a TiDB cluster from v4.0.0 or later to v5.4.0 or later, this variable remains the same as before the upgrade.</li>
+<li>For the newly created TiDB clusters of v5.4.0 and later, this variable is <code>ON</code> by default.</li>
+</ul>
+</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_store_limit-new-in-v304-and-v40"><code>tidb_store_limit</code></a></td>
+<td>Modified</td>
+<td>Before v5.4.0, this variable can be configured at instance level and globally. Starting from v5.4.0, this variable only supports global configuration.</td>
+</tr>
+</tbody>
+</table>
 ### Configuration file parameters
diff --git a/releases/release-6.0.0-dmr.md b/releases/release-6.0.0-dmr.md
index 9b8c8211fc528..73198ae4f28a6 100644
--- a/releases/release-6.0.0-dmr.md
+++ b/releases/release-6.0.0-dmr.md
@@ -286,49 +286,254 @@ TiDB v6.0.0 is a DMR, and its version is 6.0.0-DMR.

 ### System variables

-| Variable name | Change type | Description |
-|:---|:---|:---|
-| `placement_checks` | Deleted | Controls whether the DDL statement validates the placement rules specified by [Placement Rules in SQL](/placement-rules-in-sql.md). Replaced by `tidb_placement_mode`. |
-| `tidb_enable_alter_placement` | Deleted | Controls whether to enable [placement rules in SQL](/placement-rules-in-sql.md). |
-| `tidb_mem_quota_hashjoin`<br/>`tidb_mem_quota_indexlookupjoin`<br/>`tidb_mem_quota_indexlookupreader`<br/>`tidb_mem_quota_mergejoin`<br/>`tidb_mem_quota_sort`<br/>`tidb_mem_quota_topn` | Deleted | Since v5.0, these variables have been replaced by `tidb_mem_quota_query` and removed from the [system variables](/system-variables.md) document. To ensure compatibility, these variables were kept in source code. Since TiDB 6.0.0, these variables are removed from the code, too. |
-| [`tidb_enable_mutation_checker`](/system-variables.md#tidb_enable_mutation_checker-new-in-v600) | Newly added | Controls whether to enable the mutation checker. The default value is `ON`. For existing clusters that upgrade from versions earlier than v6.0.0, the mutation checker is disabled by default. |
-| [`tidb_ignore_prepared_cache_close_stmt`](/system-variables.md#tidb_ignore_prepared_cache_close_stmt-new-in-v600) | Newly added | Controls whether to ignore the command that closes Prepared Statement. The default value is `OFF`. |
-| [`tidb_mem_quota_binding_cache`](/system-variables.md#tidb_mem_quota_binding_cache-new-in-v600) | Newly added | Sets the memory usage threshold for the cache holding `binding`. The default value is `67108864` (64 MiB). |
-| [`tidb_placement_mode`](/system-variables.md#tidb_placement_mode-new-in-v600) | Newly added | Controls whether DDL statements ignore the placement rules specified by [Placement Rules in SQL](/placement-rules-in-sql.md). The default value is `strict`, which means that DDL statements do not ignore placement rules. |
-| [`tidb_rc_read_check_ts`](/system-variables.md#tidb_rc_read_check_ts-new-in-v600) | Newly added | |
-| [`tidb_sysdate_is_now`](/system-variables.md#tidb_sysdate_is_now-new-in-v600) | Newly added | Controls whether the `SYSDATE` function can be replaced by the `NOW` function. This configuration item has the same effect as the MySQL option [`sysdate-is-now`](https://dev.mysql.com/doc/refman/8.0/en/server-options.html#option_mysqld_sysdate-is-now). The default value is `OFF`. |
-| [`tidb_table_cache_lease`](/system-variables.md#tidb_table_cache_lease-new-in-v600) | Newly added | Controls the lease time of [table cache](/cached-tables.md), in seconds. The default value is `3`. |
-| [`tidb_top_sql_max_meta_count`](/system-variables.md#tidb_top_sql_max_meta_count-new-in-v600) | Newly added | Controls the maximum number of SQL statement types collected by [Top SQL](/dashboard/top-sql.md) per minute. The default value is `5000`. |
-| [`tidb_top_sql_max_time_series_count`](/system-variables.md#tidb_top_sql_max_time_series_count-new-in-v600) | Newly added | Controls how many SQL statements that contribute the most to the load (that is, top N) can be recorded by [Top SQL](/dashboard/top-sql.md) per minute. The default value is `100`. |
-| [`tidb_txn_assertion_level`](/system-variables.md#tidb_txn_assertion_level-new-in-v600) | Newly added | Controls the assertion level. The assertion is a consistency check between data and indexes, which checks whether a key being written exists in the transaction commit process. By default, the check enables most of the check items, with almost no impact on performance. For existing clusters that upgrade from versions earlier than v6.0.0, the check is disabled by default. |
+<table>
+<thead>
+<tr>
+<th>Variable name</th>
+<th>Change type</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><code>placement_checks</code></td>
+<td>Deleted</td>
+<td>Controls whether the DDL statement validates the placement rules specified by <a href="/placement-rules-in-sql.md">Placement Rules in SQL</a>. Replaced by <code>tidb_placement_mode</code>.</td>
+</tr>
+<tr>
+<td><code>tidb_enable_alter_placement</code></td>
+<td>Deleted</td>
+<td>Controls whether to enable <a href="/placement-rules-in-sql.md">placement rules in SQL</a>.</td>
+</tr>
+<tr>
+<td>
+<code>tidb_mem_quota_hashjoin</code><br/>
+<code>tidb_mem_quota_indexlookupjoin</code><br/>
+<code>tidb_mem_quota_indexlookupreader</code><br/>
+<code>tidb_mem_quota_mergejoin</code><br/>
+<code>tidb_mem_quota_sort</code><br/>
+<code>tidb_mem_quota_topn</code>
+</td>
+<td>Deleted</td>
+<td>Since v5.0, these variables have been replaced by <code>tidb_mem_quota_query</code> and removed from the <a href="/system-variables.md">system variables</a> document. To ensure compatibility, these variables were kept in source code. Since TiDB 6.0.0, these variables are removed from the code, too.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_enable_mutation_checker-new-in-v600"><code>tidb_enable_mutation_checker</code></a></td>
+<td>Newly added</td>
+<td>Controls whether to enable the mutation checker. The default value is <code>ON</code>. For existing clusters that upgrade from versions earlier than v6.0.0, the mutation checker is disabled by default.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_ignore_prepared_cache_close_stmt-new-in-v600"><code>tidb_ignore_prepared_cache_close_stmt</code></a></td>
+<td>Newly added</td>
+<td>Controls whether to ignore the command that closes Prepared Statement. The default value is <code>OFF</code>.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_mem_quota_binding_cache-new-in-v600"><code>tidb_mem_quota_binding_cache</code></a></td>
+<td>Newly added</td>
+<td>Sets the memory usage threshold for the cache holding <code>binding</code>. The default value is <code>67108864</code> (64 MiB).</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_placement_mode-new-in-v600"><code>tidb_placement_mode</code></a></td>
+<td>Newly added</td>
+<td>Controls whether DDL statements ignore the placement rules specified by <a href="/placement-rules-in-sql.md">Placement Rules in SQL</a>. The default value is <code>strict</code>, which means that DDL statements do not ignore placement rules.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_rc_read_check_ts-new-in-v600"><code>tidb_rc_read_check_ts</code></a></td>
+<td>Newly added</td>
+<td>
+<ul>
+<li>Optimizes the latency of read statements within a transaction. If read-write conflicts are severe, turning this variable on adds extra overhead and latency, causing performance regressions. The default value is <code>OFF</code>.</li>
+<li>This variable is not yet compatible with replica-read. If a read request has <code>tidb_rc_read_check_ts</code> on, it might not be able to use replica-read. Do not turn on both variables at the same time.</li>
+</ul>
+</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_sysdate_is_now-new-in-v600"><code>tidb_sysdate_is_now</code></a></td>
+<td>Newly added</td>
+<td>Controls whether the <code>SYSDATE</code> function can be replaced by the <code>NOW</code> function. This configuration item has the same effect as the MySQL option <a href="https://dev.mysql.com/doc/refman/8.0/en/server-options.html#option_mysqld_sysdate-is-now"><code>sysdate-is-now</code></a>. The default value is <code>OFF</code>.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_table_cache_lease-new-in-v600"><code>tidb_table_cache_lease</code></a></td>
+<td>Newly added</td>
+<td>Controls the lease time of <a href="/cached-tables.md">table cache</a>, in seconds. The default value is <code>3</code>.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_top_sql_max_meta_count-new-in-v600"><code>tidb_top_sql_max_meta_count</code></a></td>
+<td>Newly added</td>
+<td>Controls the maximum number of SQL statement types collected by <a href="/dashboard/top-sql.md">Top SQL</a> per minute. The default value is <code>5000</code>.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_top_sql_max_time_series_count-new-in-v600"><code>tidb_top_sql_max_time_series_count</code></a></td>
+<td>Newly added</td>
+<td>Controls how many SQL statements that contribute the most to the load (that is, top N) can be recorded by <a href="/dashboard/top-sql.md">Top SQL</a> per minute. The default value is <code>100</code>.</td>
+</tr>
+<tr>
+<td><a href="/system-variables.md#tidb_txn_assertion_level-new-in-v600"><code>tidb_txn_assertion_level</code></a></td>
+<td>Newly added</td>
+<td>Controls the assertion level. The assertion is a consistency check between data and indexes, which checks whether a key being written exists in the transaction commit process. By default, the check enables most of the check items, with almost no impact on performance. For existing clusters that upgrade from versions earlier than v6.0.0, the check is disabled by default.</td>
+</tr>
+</tbody>
+</table>
### Configuration file parameters -| Configuration file | Configuration | Change type | Description | -|:---|:---|:---|:---| -| TiDB | `stmt-summary.enable`
`stmt-summary.enable-internal-query`
`stmt-summary.history-size`
`stmt-summary.max-sql-length`
`stmt-summary.max-stmt-count`
`stmt-summary.refresh-interval` | Deleted | Configuration related to the [statement summary tables](/statement-summary-tables.md). All these configuration items are removed. You need to use SQL variables to control the statement summary tables. | -| TiDB | [`new_collations_enabled_on_first_bootstrap`](/tidb-configuration-file.md#new_collations_enabled_on_first_bootstrap) | Modified | Controls whether to enable support for the new collation. Since v6.0, the default value is changed from `false` to `true`. This configuration item only takes effect when the cluster is initialized for the first time. After the first bootstrap, you cannot enable or disable the new collation framework using this configuration item. | -| TiKV | [`backup.num-threads`](/tikv-configuration-file.md#num-threads-1) | Modified | The value range is modified to `[1, CPU]`. | -| TiKV | [`raftstore.apply-max-batch-size`](/tikv-configuration-file.md#apply-max-batch-size) | Modified | The maximum value is changed to `10240`. | -| TiKV | [`raftstore.raft-max-size-per-msg`](/tikv-configuration-file.md#raft-max-size-per-msg) | Modified | | -| TiKV | [`raftstore.store-max-batch-size`](/tikv-configuration-file.md#store-max-batch-size) | Modified | The maximum value is set to `10240`. | -| TiKV | [`readpool.unified.max-thread-count`](/tikv-configuration-file.md#max-thread-count) | Modified | The adjustable range is changed to `[min-thread-count, MAX(4, CPU)]`. | -| TiKV | [`rocksdb.enable-pipelined-write`](/tikv-configuration-file.md#enable-pipelined-write) | Modified | The default value is changed from `true` to `false`. When this configuration is enabled, the previous Pipelined Write is used. When this configuration is disabled, the new Pipelined Commit mechanism is used. 
| -| TiKV | [`rocksdb.max-background-flushes`](/tikv-configuration-file.md#max-background-flushes) | Modified | | -| TiKV | [`rocksdb.max-background-jobs`](/tikv-configuration-file.md#max-background-jobs) | Modified | | -| TiFlash | [`profiles.default.dt_enable_logical_split`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Determines whether the segment of DeltaTree Storage Engine uses logical split. The default value is changed from `true` to `false`. | -| TiFlash | [`profiles.default.enable_elastic_threadpool`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Controls whether to enable the elastic thread pool. The default value is changed from `false` to `true`. | -| TiFlash | [`storage.format_version`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Modified | Controls the data validation feature of TiFlash. The default value is changed from `2` to `3`.
When `format_version` is set to `3`, consistency check is performed on the read operations for all TiFlash data to avoid incorrect read due to hardware failure.
Note that the new format version cannot be downgraded in place to versions earlier than v5.4. | -| TiDB | [`pessimistic-txn.pessimistic-auto-commit`](/tidb-configuration-file.md#pessimistic-auto-commit-new-in-v600) | Newly added | Determines the transaction mode that the auto-commit transaction uses when the pessimistic transaction mode is globally enabled (`tidb_txn_mode='pessimistic'`). | -| TiKV | [`pessimistic-txn.in-memory`](/tikv-configuration-file.md#in-memory-new-in-v600) | Newly added | Controls whether to enable the in-memory pessimistic lock. With this feature enabled, pessimistic transactions store pessimistic locks in TiKV memory as much as possible, instead of writing pessimistic locks to disks or replicating to other replicas. This improves the performance of pessimistic transactions; however, there is a low probability that a pessimistic lock will be lost, which might cause the pessimistic transaction to fail to commit. The default value is `true`. | -| TiKV | [`quota`](/tikv-configuration-file.md#quota) | Newly added | Add configuration items related to Quota Limiter, which limit the resources occupied by frontend requests. Quota Limiter is an experimental feature and is disabled by default. New quota-related configuration items are `foreground-cpu-time`, `foreground-write-bandwidth`, `foreground-read-bandwidth`, and `max-delay-duration`. | -| TiFlash | [`profiles.default.dt_compression_method`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Newly added | Specifies the compression algorithm for TiFlash. The optional values are `LZ4`, `zstd` and `LZ4HC`, all case insensitive. The default value is `LZ4`. | -| TiFlash | [`profiles.default.dt_compression_level`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Newly added | Specifies the compression level of TiFlash. The default value is `1`. 
| -| DM | [`loaders..import-mode`](/dm/task-configuration-file-full.md#task-configuration-file-template-advanced) | Newly added | The import mode during the full import phase. Since v6.0, DM uses TiDB Lightning's TiDB-backend mode to import data during the full import phase; the previous Loader component is no longer used. This is an internal replacement and has no obvious impact on daily operations.
The default value is set to `sql`, which means using `tidb-backend` mode. In some rare cases, `tidb-backend` might not be fully compatible. You can fall back to Loader mode by configuring this parameter to `loader`. | -| DM | [`loaders..on-duplicate`](/dm/task-configuration-file-full.md#task-configuration-file-template-advanced) | Newly added | Specifies the methods to resolve conflicts during the full import phase. The default value is `replace`, which means using the new data to replace the existing data. | -| TiCDC | [`dial-timeout`](/ticdc/ticdc-sink-to-kafka.md#configure-sink-uri-for-kafka) | Newly added | The timeout in establishing a connection with the downstream Kafka. The default value is `10s`. | -| TiCDC | [`read-timeout`](/ticdc/ticdc-sink-to-kafka.md#configure-sink-uri-for-kafka) | Newly added | The timeout in getting a response returned by the downstream Kafka. The default value is `10s`. | -| TiCDC | [`write-timeout`](/ticdc/ticdc-sink-to-kafka.md#configure-sink-uri-for-kafka) | Newly added | The timeout in sending a request to the downstream Kafka. The default value is `10s`. | + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+<table>
+<thead>
+<tr><th>Configuration file</th><th>Configuration</th><th>Change type</th><th>Description</th></tr>
+</thead>
+<tbody>
+<tr><td>TiDB</td><td><code>stmt-summary.enable</code><br/><code>stmt-summary.enable-internal-query</code><br/><code>stmt-summary.history-size</code><br/><code>stmt-summary.max-sql-length</code><br/><code>stmt-summary.max-stmt-count</code><br/><code>stmt-summary.refresh-interval</code></td><td>Deleted</td><td>Configurations related to the statement summary tables. All these configuration items are removed. You need to use SQL variables to control the statement summary tables.</td></tr>
+<tr><td>TiDB</td><td><code>new_collations_enabled_on_first_bootstrap</code></td><td>Modified</td><td>Controls whether to enable support for the new collation. Since v6.0, the default value is changed from <code>false</code> to <code>true</code>. This configuration item takes effect only when the cluster is initialized for the first time. After the first bootstrap, you cannot enable or disable the new collation framework using this configuration item.</td></tr>
+<tr><td>TiKV</td><td><code>backup.num-threads</code></td><td>Modified</td><td>The value range is modified to <code>[1, CPU]</code>.</td></tr>
+<tr><td>TiKV</td><td><code>raftstore.apply-max-batch-size</code></td><td>Modified</td><td>The maximum value is changed to <code>10240</code>.</td></tr>
+<tr><td>TiKV</td><td><code>raftstore.raft-max-size-per-msg</code></td><td>Modified</td><td>The minimum value is changed from <code>0</code> to larger than <code>0</code>.<br/>The maximum value is set to <code>3GB</code>.<br/>The unit is changed from <code>MB</code> to <code>KB|MB|GB</code>.</td></tr>
+<tr><td>TiKV</td><td><code>raftstore.store-max-batch-size</code></td><td>Modified</td><td>The maximum value is set to <code>10240</code>.</td></tr>
+<tr><td>TiKV</td><td><code>readpool.unified.max-thread-count</code></td><td>Modified</td><td>The adjustable range is changed to <code>[min-thread-count, MAX(4, CPU)]</code>.</td></tr>
+<tr><td>TiKV</td><td><code>rocksdb.enable-pipelined-write</code></td><td>Modified</td><td>The default value is changed from <code>true</code> to <code>false</code>. When this configuration is enabled, the previous Pipelined Write is used. When this configuration is disabled, the new Pipelined Commit mechanism is used.</td></tr>
+<tr><td>TiKV</td><td><code>rocksdb.max-background-flushes</code></td><td>Modified</td><td>When the number of CPU cores is 10, the default value is <code>3</code>.<br/>When the number of CPU cores is 8, the default value is <code>2</code>.</td></tr>
+<tr><td>TiKV</td><td><code>rocksdb.max-background-jobs</code></td><td>Modified</td><td>When the number of CPU cores is 10, the default value is <code>9</code>.<br/>When the number of CPU cores is 8, the default value is <code>7</code>.</td></tr>
+<tr><td>TiFlash</td><td><code>profiles.default.dt_enable_logical_split</code></td><td>Modified</td><td>Determines whether the segment of DeltaTree Storage Engine uses logical split. The default value is changed from <code>true</code> to <code>false</code>.</td></tr>
+<tr><td>TiFlash</td><td><code>profiles.default.enable_elastic_threadpool</code></td><td>Modified</td><td>Controls whether to enable the elastic thread pool. The default value is changed from <code>false</code> to <code>true</code>.</td></tr>
+<tr><td>TiFlash</td><td><code>storage.format_version</code></td><td>Modified</td><td>Controls the data validation feature of TiFlash. The default value is changed from <code>2</code> to <code>3</code>.<br/>When <code>format_version</code> is set to <code>3</code>, a consistency check is performed on read operations for all TiFlash data to avoid incorrect reads caused by hardware failure.<br/>Note that the new format version cannot be downgraded in place to versions earlier than v5.4.</td></tr>
+<tr><td>TiDB</td><td><code>pessimistic-txn.pessimistic-auto-commit</code></td><td>Newly added</td><td>Determines the transaction mode that the auto-commit transaction uses when the pessimistic transaction mode is globally enabled (<code>tidb_txn_mode='pessimistic'</code>).</td></tr>
+<tr><td>TiKV</td><td><code>pessimistic-txn.in-memory</code></td><td>Newly added</td><td>Controls whether to enable the in-memory pessimistic lock. With this feature enabled, pessimistic transactions store pessimistic locks in TiKV memory as much as possible, instead of writing pessimistic locks to disks or replicating them to other replicas. This improves the performance of pessimistic transactions; however, there is a low probability that a pessimistic lock will be lost, which might cause the pessimistic transaction to fail to commit. The default value is <code>true</code>.</td></tr>
+<tr><td>TiKV</td><td><code>quota</code></td><td>Newly added</td><td>Adds configuration items related to Quota Limiter, which limit the resources occupied by frontend requests. Quota Limiter is an experimental feature and is disabled by default. The new quota-related configuration items are <code>foreground-cpu-time</code>, <code>foreground-write-bandwidth</code>, <code>foreground-read-bandwidth</code>, and <code>max-delay-duration</code>.</td></tr>
+<tr><td>TiFlash</td><td><code>profiles.default.dt_compression_method</code></td><td>Newly added</td><td>Specifies the compression algorithm for TiFlash. The optional values are <code>LZ4</code>, <code>zstd</code>, and <code>LZ4HC</code>, all case insensitive. The default value is <code>LZ4</code>.</td></tr>
+<tr><td>TiFlash</td><td><code>profiles.default.dt_compression_level</code></td><td>Newly added</td><td>Specifies the compression level of TiFlash. The default value is <code>1</code>.</td></tr>
+<tr><td>DM</td><td><code>loaders.&lt;name&gt;.import-mode</code></td><td>Newly added</td><td>The import mode during the full import phase. Since v6.0, DM uses TiDB Lightning's TiDB-backend mode to import data during the full import phase; the previous Loader component is no longer used. This is an internal replacement and has no obvious impact on daily operations.<br/>The default value is <code>sql</code>, which means using the <code>tidb-backend</code> mode. In some rare cases, <code>tidb-backend</code> might not be fully compatible. You can fall back to Loader mode by setting this parameter to <code>loader</code>.</td></tr>
+<tr><td>DM</td><td><code>loaders.&lt;name&gt;.on-duplicate</code></td><td>Newly added</td><td>Specifies the method to resolve conflicts during the full import phase. The default value is <code>replace</code>, which means using the new data to replace the existing data.</td></tr>
+<tr><td>TiCDC</td><td><code>dial-timeout</code></td><td>Newly added</td><td>The timeout for establishing a connection with the downstream Kafka. The default value is <code>10s</code>.</td></tr>
+<tr><td>TiCDC</td><td><code>read-timeout</code></td><td>Newly added</td><td>The timeout for getting a response returned by the downstream Kafka. The default value is <code>10s</code>.</td></tr>
+<tr><td>TiCDC</td><td><code>write-timeout</code></td><td>Newly added</td><td>The timeout for sending a request to the downstream Kafka. The default value is <code>10s</code>.</td></tr>
+</tbody>
+</table>
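To make the newly added TiKV items above concrete, the following is a minimal sketch of a `tikv.toml` fragment that exercises them. The section and key names follow the table above; the specific values shown are illustrative assumptions, not recommendations — check the TiKV configuration file reference for the documented defaults before applying any of them.

```toml
# Sketch of a tikv.toml fragment for the items added in v6.0.
# Values are illustrative; verify defaults against the TiKV
# configuration file reference.

[pessimistic-txn]
# Keep pessimistic locks in TiKV memory where possible
# (documented default: true).
in-memory = true

[quota]
# Quota Limiter is experimental and disabled by default;
# the values below are placeholders for illustration only.
foreground-cpu-time = 0
foreground-write-bandwidth = "0B"
foreground-read-bandwidth = "0B"
max-delay-duration = "500ms"
```

Because Quota Limiter is experimental, leaving the `quota` section at its defaults (disabled) is the safer choice for production clusters.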
### Others diff --git a/releases/release-6.1.0.md b/releases/release-6.1.0.md index 4c838c63bc3f5..4b555be897299 100644 --- a/releases/release-6.1.0.md +++ b/releases/release-6.1.0.md @@ -172,7 +172,9 @@ In 6.1.0, the key new features or improvements are as follows: * Data is scoped according to different usage and supports co-existence of a single TiDB cluster, Transactional KV, RawKV applications. + Due to significant changes in the underlying storage format, after enabling API V2, you cannot roll back a TiKV cluster to a version earlier than v6.1.0. Downgrading TiKV might result in data corruption. + [User document](/tikv-configuration-file.md#api-version-new-in-v610), [#11745](https://github.com/tikv/tikv/issues/11745) diff --git a/releases/versioning.md b/releases/versioning.md index a87346f18dc49..63d933739a128 100644 --- a/releases/versioning.md +++ b/releases/versioning.md @@ -6,7 +6,9 @@ summary: Learn the version numbering system of TiDB. # TiDB Versioning + It is recommended to always upgrade to the latest patch release of your release series. + TiDB offers two release series: @@ -46,7 +48,9 @@ Example version: - 6.1.1 + v5.1.0, v5.2.0, v5.3.0, v5.4.0 were released only two months after their preceding releases, but all four releases are LTS and provide patch releases. + ## Development Milestone Releases diff --git a/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md b/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md index 6f7eab53a753e..ad728d54d0071 100644 --- a/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md +++ b/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md @@ -63,9 +63,12 @@ Before you begin to create an endpoint: - Prepare the following [IAM roles](https://cloud.google.com/iam/docs/understanding-roles) with the permissions needed to create an endpoint. 
- | Tasks | Required IAM Roles | - |---|---| - | | | + - Tasks: + - Create an endpoint + - Automatically or manually configure [DNS entries](https://cloud.google.com/vpc/docs/configure-private-service-connect-services#dns-endpoint) for an endpoint + - Required IAM roles: + - [Compute Network Admin](https://cloud.google.com/iam/docs/understanding-roles#compute.networkAdmin) (roles/compute.networkAdmin) + - [Service Directory Editor](https://cloud.google.com/iam/docs/understanding-roles#servicedirectory.editor) (roles/servicedirectory.editor) Perform the following steps to go to the **Google Cloud Private Endpoint** page: diff --git a/tidb-lightning/tidb-lightning-overview.md b/tidb-lightning/tidb-lightning-overview.md index 1a0625e0c02bc..16afb97a59c5f 100644 --- a/tidb-lightning/tidb-lightning-overview.md +++ b/tidb-lightning/tidb-lightning-overview.md @@ -42,5 +42,7 @@ TiDB Lightning supports two import modes, configured by `backend`. The import mo | Whether the TiDB cluster can provide service during import | [Limited service](/tidb-lightning/tidb-lightning-physical-import-mode.md#limitations) | Yes | + The preceding performance data is used to compare the import performance difference between the two modes. The actual import speed is affected by various factors such as hardware configuration, table schema, and the number of indexes. + diff --git a/tidb-lightning/tidb-lightning-physical-import-mode-usage.md b/tidb-lightning/tidb-lightning-physical-import-mode-usage.md index 8f57dd97aa0e0..5918ebf39f9cf 100644 --- a/tidb-lightning/tidb-lightning-physical-import-mode-usage.md +++ b/tidb-lightning/tidb-lightning-physical-import-mode-usage.md @@ -124,7 +124,7 @@ The new version of conflict detection has the following limitations: - Before importing, TiDB Lightning prechecks potential conflicting data by reading all data and encoding it. During the detection process, TiDB Lightning uses `tikv-importer.sorted-kv-dir` to store temporary files. 
After the detection is complete, TiDB Lightning retains the results for import phase. This introduces additional overhead for time consumption, disk space usage, and API requests to read the data. - The new version of conflict detection only works in a single node, and does not apply to parallel imports and scenarios where the `disk-quota` parameter is enabled. -- The new version (`conflict`) and old version (`tikv-importer.duplicate-resolution`) conflict detection cannot be used at the same time. The new version of conflict detection is enabled when the configuration [`conflict.strategy`](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task) is set. +- The new version (`conflict`) and old version (`tikv-importer.duplicate-resolution`) conflict detection cannot be used at the same time. The new version of conflict detection is enabled when the configuration [`conflict.strategy`](/tidb-lightning/tidb-lightning-configuration.md#tidb-lightning-task) is set. Compared with the old version of conflict detection, the new version takes less time when the imported data contains a large amount of conflicting data. It is recommended that you use the new version of conflict detection in non-parallel import tasks when the data contains conflicting data and there is sufficient local disk space. @@ -187,9 +187,11 @@ Starting from v6.2.0, TiDB Lightning implements a mechanism to limit the impact Starting from v7.1.0, you can control the scope of pausing scheduling by using the TiDB Lightning parameter [`pause-pd-scheduler-scope`](/tidb-lightning/tidb-lightning-configuration.md). The default value is `"table"`, which means that the scheduling is paused only for the Region that stores the target table data. When there is no business traffic in the cluster, it is recommended to set this parameter to `"global"` to avoid interference from other scheduling during the import. + TiDB Lightning does not support importing data into a table that already contains data. 
The TiDB cluster must be v6.1.0 or later versions. For earlier versions, TiDB Lightning keeps the old behavior, which pauses scheduling globally and severely impacts the online application during the import. + By default, TiDB Lightning pauses the cluster scheduling for the minimum range possible. However, under the default configuration, the cluster performance still might be affected by fast import. To avoid this, you can configure the following options to control the import speed and other factors that might impact the cluster performance: diff --git a/tiup/tiup-component-dm-import.md b/tiup/tiup-component-dm-import.md index ccfd6b98918bc..d238adec21c1b 100644 --- a/tiup/tiup-component-dm-import.md +++ b/tiup/tiup-component-dm-import.md @@ -5,7 +5,9 @@ title: tiup dm import # tiup dm import Only for upgrading DM v1.0 + This command is used only for upgrading DM clusters from v1.0 to v2.0 or later versions. + In DM v1.0, the cluster is basically deployed using TiDB Ansible. TiUP DM provides the `import` command to import v1.0 clusters and redeploy the clusters in DM v2.0.
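As a sketch of what such an upgrade step might look like, the invocation below uses the `--dir` and `--cluster-version` options documented for `tiup dm import`; the path and target version are placeholders, so confirm the exact flags against your TiUP version before running it.

```shell
# Sketch: import a DM v1.0 cluster that was deployed with TiDB Ansible,
# then redeploy it as DM v2.0. The directory and version are placeholders.
tiup dm import \
  --dir=/path/to/tidb-ansible \
  --cluster-version v2.0.7
```

After the import completes, the cluster is managed by TiUP DM like any natively deployed v2.0 cluster.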