From c01111ea3c6345951dddda18c75c8d8bf5554757 Mon Sep 17 00:00:00 2001
From: Aolin
Date: Mon, 21 Oct 2024 15:32:06 +0800
Subject: [PATCH 1/8] v8.4: bump the latest TiDB version to v8.4.0

Signed-off-by: Aolin
---
 br/backup-and-restore-use-cases.md            |  6 ++--
 certificate-authentication.md                 |  2 +-
 ...-guide-sample-application-nodejs-mysql2.md |  2 +-
 ...guide-sample-application-nodejs-mysqljs.md |  2 +-
 ...-guide-sample-application-nodejs-prisma.md |  2 +-
 ...guide-sample-application-nodejs-typeorm.md |  2 +-
 ...ev-guide-sample-application-ruby-mysql2.md |  2 +-
 ...dev-guide-sample-application-ruby-rails.md |  2 +-
 dm/maintain-dm-using-tiup.md                  |  2 +-
 dm/quick-start-create-task.md                 |  2 +-
 download-ecosystem-tools.md                   |  2 +-
 functions-and-operators/tidb-functions.md     |  2 +-
 .../information-schema-tidb-servers-info.md   |  2 +-
 pd-control.md                                 |  2 +-
 ...erformance-schema-session-connect-attrs.md |  2 +-
 post-installation-check.md                    |  2 +-
 production-deployment-using-tiup.md           |  6 ++--
 quick-start-with-tidb.md                      | 10 +++---
 scale-microservices-using-tiup.md             |  2 +-
 scale-tidb-using-tiup.md                      |  2 +-
 system-variables.md                           |  2 +-
 ticdc/deploy-ticdc.md                         |  4 +--
 ticdc/ticdc-changefeed-config.md              |  2 +-
 ticdc/ticdc-changefeed-overview.md            |  2 +-
 ticdc/ticdc-manage-changefeed.md              |  2 +-
 ticdc/ticdc-open-api-v2.md                    |  2 +-
 ticdc/ticdc-sink-to-cloud-storage.md          |  2 +-
 ticdc/ticdc-sink-to-pulsar.md                 |  2 +-
 tidb-monitoring-api.md                        |  2 +-
 tiflash/create-tiflash-replicas.md            |  4 +--
 tiup/tiup-cluster.md                          | 30 ++++++++--------
 tiup/tiup-component-cluster-deploy.md         |  2 +-
 tiup/tiup-component-cluster-patch.md          |  2 +-
 tiup/tiup-component-cluster-upgrade.md        |  2 +-
 tiup/tiup-component-dm-upgrade.md             |  2 +-
 tiup/tiup-component-management.md             | 12 +++----
 tiup/tiup-mirror.md                           |  6 ++--
 tiup/tiup-playground.md                       |  4 +--
 upgrade-monitoring-services.md                |  4 +--
 upgrade-tidb-using-tiup.md                    | 36 +++++++++----------
 40 files changed, 90 insertions(+), 90 deletions(-)

diff --git a/br/backup-and-restore-use-cases.md b/br/backup-and-restore-use-cases.md
index 6812173ea86fc..7fde56e5cd670 100644
--- a/br/backup-and-restore-use-cases.md
+++ b/br/backup-and-restore-use-cases.md
@@ -17,7 +17,7 @@ With PITR, you can satisfy the preceding requirements.
 
 ## Deploy the TiDB cluster and BR
 
-To use PITR, you need to deploy a TiDB cluster >= v6.2.0 and update BR to the same version as the TiDB cluster. This document uses v8.3.0 as an example.
+To use PITR, you need to deploy a TiDB cluster >= v6.2.0 and update BR to the same version as the TiDB cluster. This document uses v8.4.0 as an example.
 
 The following table shows the recommended hardware resources for using PITR in a TiDB cluster.
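For reference, once both the cluster and BR are on v8.4.0, starting the log backup task that PITR relies on looks roughly like the following sketch (the task name, PD address, and S3 path are illustrative placeholders, not values from this patch):

```shell
# Start a log backup task so that PITR has a continuous change log to restore from.
# --pd points at any PD node; --storage is the backup destination.
tiup br:v8.4.0 log start --task-name=pitr \
    --pd "10.0.1.1:2379" \
    --storage "s3://backup-bucket/log-backup"
```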
@@ -44,13 +44,13 @@ Install or upgrade BR using TiUP: - Install: ```shell - tiup install br:v8.3.0 + tiup install br:v8.4.0 ``` - Upgrade: ```shell - tiup update br:v8.3.0 + tiup update br:v8.4.0 ``` ## Configure backup storage (Amazon S3) diff --git a/certificate-authentication.md b/certificate-authentication.md index db1a1ef7914df..6cb914473b473 100644 --- a/certificate-authentication.md +++ b/certificate-authentication.md @@ -361,7 +361,7 @@ The output: ``` -------------- -mysql Ver 8.3.0 for Linux on x86_64 (MySQL Community Server - GPL) +mysql Ver 8.4.0 for Linux on x86_64 (MySQL Community Server - GPL) Connection id: 1 Current database: test diff --git a/develop/dev-guide-sample-application-nodejs-mysql2.md b/develop/dev-guide-sample-application-nodejs-mysql2.md index ab0b9c9c77d1c..a8f2dc5f74ebc 100644 --- a/develop/dev-guide-sample-application-nodejs-mysql2.md +++ b/develop/dev-guide-sample-application-nodejs-mysql2.md @@ -191,7 +191,7 @@ npm start If the connection is successful, the console will output the version of the TiDB cluster as follows: ``` -🔌 Connected to TiDB cluster! (TiDB version: 8.0.11-TiDB-v8.3.0) +🔌 Connected to TiDB cluster! (TiDB version: 8.0.11-TiDB-v8.4.0) âŗ Loading sample game data... ✅ Loaded sample game data. diff --git a/develop/dev-guide-sample-application-nodejs-mysqljs.md b/develop/dev-guide-sample-application-nodejs-mysqljs.md index 750b6f26229c2..28661cd92dcea 100644 --- a/develop/dev-guide-sample-application-nodejs-mysqljs.md +++ b/develop/dev-guide-sample-application-nodejs-mysqljs.md @@ -191,7 +191,7 @@ npm start If the connection is successful, the console will output the version of the TiDB cluster as follows: ``` -🔌 Connected to TiDB cluster! (TiDB version: 8.0.11-TiDB-v8.3.0) +🔌 Connected to TiDB cluster! (TiDB version: 8.0.11-TiDB-v8.4.0) âŗ Loading sample game data... ✅ Loaded sample game data. diff --git a/develop/dev-guide-sample-application-nodejs-prisma.md b/develop/dev-guide-sample-application-nodejs-prisma.md index 6241603e6fec4..5e9be407196a6 100644 --- a/develop/dev-guide-sample-application-nodejs-prisma.md +++ b/develop/dev-guide-sample-application-nodejs-prisma.md @@ -270,7 +270,7 @@ void main(); If the connection is successful, the terminal will output the version of the TiDB cluster as follows: ``` -🔌 Connected to TiDB cluster! (TiDB version: 8.0.11-TiDB-v8.3.0) +🔌 Connected to TiDB cluster! (TiDB version: 8.0.11-TiDB-v8.4.0) 🆕 Created a new player with ID 1. ℹī¸ Got Player 1: Player { id: 1, coins: 100, goods: 100 } đŸ”ĸ Added 50 coins and 50 goods to player 1, now player 1 has 150 coins and 150 goods. diff --git a/develop/dev-guide-sample-application-nodejs-typeorm.md b/develop/dev-guide-sample-application-nodejs-typeorm.md index b7949c84953c8..19401445a1772 100644 --- a/develop/dev-guide-sample-application-nodejs-typeorm.md +++ b/develop/dev-guide-sample-application-nodejs-typeorm.md @@ -233,7 +233,7 @@ npm start If the connection is successful, the terminal will output the version of the TiDB cluster as follows: ``` -🔌 Connected to TiDB cluster! (TiDB version: 8.0.11-TiDB-v8.3.0) +🔌 Connected to TiDB cluster! (TiDB version: 8.0.11-TiDB-v8.4.0) 🆕 Created a new player with ID 2. ℹī¸ Got Player 2: Player { id: 2, coins: 100, goods: 100 } đŸ”ĸ Added 50 coins and 50 goods to player 2, now player 2 has 100 coins and 150 goods. 
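Each of the connection samples above prints the server version string on success; as a quick cross-check, the same string can be queried from any MySQL-compatible client (the 127.0.0.1:4000 endpoint and root user below are assumptions for a local test cluster):

```shell
# After the bump, the reported version should read 8.0.11-TiDB-v8.4.0.
mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT VERSION();"
```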
diff --git a/develop/dev-guide-sample-application-ruby-mysql2.md b/develop/dev-guide-sample-application-ruby-mysql2.md
index d09b37a3671f4..ed7aff9ea9c70 100644
--- a/develop/dev-guide-sample-application-ruby-mysql2.md
+++ b/develop/dev-guide-sample-application-ruby-mysql2.md
@@ -192,7 +192,7 @@ ruby app.rb
 If the connection is successful, the console will output the version of the TiDB cluster as follows:
 
 ```
-🔌 Connected to TiDB cluster! (TiDB version: 8.0.11-TiDB-v8.3.0)
+🔌 Connected to TiDB cluster! (TiDB version: 8.0.11-TiDB-v8.4.0)
 âŗ Loading sample game data...
 ✅ Loaded sample game data.
 
diff --git a/develop/dev-guide-sample-application-ruby-rails.md b/develop/dev-guide-sample-application-ruby-rails.md
index f58a415e1f673..7fb72310694fd 100644
--- a/develop/dev-guide-sample-application-ruby-rails.md
+++ b/develop/dev-guide-sample-application-ruby-rails.md
@@ -185,7 +185,7 @@ Connect to your TiDB cluster depending on the TiDB deployment option you've sele
 If the connection is successful, the console will output the version of the TiDB cluster as follows:
 
 ```
-🔌 Connected to TiDB cluster! (TiDB version: 8.0.11-TiDB-v8.3.0)
+🔌 Connected to TiDB cluster! (TiDB version: 8.0.11-TiDB-v8.4.0)
 âŗ Loading sample game data...
 ✅ Loaded sample game data.
 
diff --git a/dm/maintain-dm-using-tiup.md b/dm/maintain-dm-using-tiup.md
index c11fc35781c13..6da37af184f53 100644
--- a/dm/maintain-dm-using-tiup.md
+++ b/dm/maintain-dm-using-tiup.md
@@ -394,7 +394,7 @@ All operations above performed on the cluster machine use the SSH client embedde
 
 Then you can use the `--native-ssh` command-line flag to enable the system-native command-line tool:
 
-- Deploy a cluster: `tiup dm deploy <cluster-name> <version> <topology.yaml> --native-ssh`. Fill in the name of your cluster for `<cluster-name>`, the DM version to be deployed (such as `v8.3.0`) for `<version>`, and the topology file name for `<topology.yaml>`.
+- Deploy a cluster: `tiup dm deploy <cluster-name> <version> <topology.yaml> --native-ssh`. Fill in the name of your cluster for `<cluster-name>`, the DM version to be deployed (such as `v8.4.0`) for `<version>`, and the topology file name for `<topology.yaml>`.
 - Start a cluster: `tiup dm start <cluster-name> --native-ssh`.
 - Upgrade a cluster: `tiup dm upgrade <cluster-name> <version> ... --native-ssh`
 
diff --git a/dm/quick-start-create-task.md b/dm/quick-start-create-task.md
index 6d7b364746842..8799a84c0a029 100644
--- a/dm/quick-start-create-task.md
+++ b/dm/quick-start-create-task.md
@@ -74,7 +74,7 @@ To run a TiDB server, use the following command:
 {{< copyable "shell-regular" >}}
 
 ```bash
-wget https://download.pingcap.org/tidb-community-server-v8.3.0-linux-amd64.tar.gz
+wget https://download.pingcap.org/tidb-community-server-v8.4.0-linux-amd64.tar.gz
 tar -xzvf tidb-latest-linux-amd64.tar.gz
 mv tidb-latest-linux-amd64/bin/tidb-server ./
 ./tidb-server
diff --git a/download-ecosystem-tools.md b/download-ecosystem-tools.md
index a8d576e274c61..82f351088668d 100644
--- a/download-ecosystem-tools.md
+++ b/download-ecosystem-tools.md
@@ -28,7 +28,7 @@ You can download TiDB Toolkit from the following link:
 https://download.pingcap.org/tidb-community-toolkit-{version}-linux-{arch}.tar.gz
 ```
 
-`{version}` in the link indicates the version number of TiDB and `{arch}` indicates the architecture of the system, which can be `amd64` or `arm64`. For example, the download link for `v8.3.0` in the `amd64` architecture is `https://download.pingcap.org/tidb-community-toolkit-v8.3.0-linux-amd64.tar.gz`.
+`{version}` in the link indicates the version number of TiDB and `{arch}` indicates the architecture of the system, which can be `amd64` or `arm64`. For example, the download link for `v8.4.0` in the `amd64` architecture is `https://download.pingcap.org/tidb-community-toolkit-v8.4.0-linux-amd64.tar.gz`.
 
 > **Note:**
 >
diff --git a/functions-and-operators/tidb-functions.md b/functions-and-operators/tidb-functions.md
index 481641ef2f849..2c83abad02d13 100644
--- a/functions-and-operators/tidb-functions.md
+++ b/functions-and-operators/tidb-functions.md
@@ -544,7 +544,7 @@ SELECT TIDB_VERSION()\G
 ```sql
 *************************** 1. row ***************************
-TIDB_VERSION(): Release Version: v8.3.0
+TIDB_VERSION(): Release Version: v8.4.0
 Edition: Community
 Git Commit Hash: 821e491a20fbab36604b36b647b5bae26a2c1418
 Git Branch: HEAD
diff --git a/information-schema/information-schema-tidb-servers-info.md b/information-schema/information-schema-tidb-servers-info.md
index 1439cc9bc849b..da12c5d186071 100644
--- a/information-schema/information-schema-tidb-servers-info.md
+++ b/information-schema/information-schema-tidb-servers-info.md
@@ -50,7 +50,7 @@ The output is as follows:
        PORT: 4000
 STATUS_PORT: 10080
       LEASE: 45s
-    VERSION: 8.0.11-TiDB-v8.3.0
+    VERSION: 8.0.11-TiDB-v8.4.0
    GIT_HASH: 827d8ff2d22ac4c93ae1b841b79d468211e1d393
 BINLOG_STATUS: Off
 LABELS:
diff --git a/pd-control.md b/pd-control.md
index d1cc7a1af42ac..6bd19059fbaaa 100644
--- a/pd-control.md
+++ b/pd-control.md
@@ -29,7 +29,7 @@ To obtain `pd-ctl` of the latest version, download the TiDB server installation
 
 > **Note:**
 >
-> `{version}` in the link indicates the version number of TiDB. For example, the download link for `v8.3.0` in the `amd64` architecture is `https://download.pingcap.org/tidb-community-server-v8.3.0-linux-amd64.tar.gz`.
+> `{version}` in the link indicates the version number of TiDB. For example, the download link for `v8.4.0` in the `amd64` architecture is `https://download.pingcap.org/tidb-community-server-v8.4.0-linux-amd64.tar.gz`.
 
 ### Compile from source code
 
diff --git a/performance-schema/performance-schema-session-connect-attrs.md b/performance-schema/performance-schema-session-connect-attrs.md
index 3005f2ab1f9a6..eafdc06e924c9 100644
--- a/performance-schema/performance-schema-session-connect-attrs.md
+++ b/performance-schema/performance-schema-session-connect-attrs.md
@@ -52,7 +52,7 @@ TABLE SESSION_CONNECT_ATTRS;
 | PROCESSLIST_ID | ATTR_NAME       | ATTR_VALUE | ORDINAL_POSITION |
 +----------------+-----------------+------------+------------------+
 | 2097154        | _client_name    | libmysql   | 0                |
-| 2097154        | _client_version | 8.3.0      | 1                |
+| 2097154        | _client_version | 8.4.0      | 1                |
 | 2097154        | _os             | Linux      | 2                |
 | 2097154        | _pid            | 1299203    | 3                |
 | 2097154        | _platform       | x86_64     | 4                |
diff --git a/post-installation-check.md b/post-installation-check.md
index f70aa09d95cb2..e8e87b2ab0202 100644
--- a/post-installation-check.md
+++ b/post-installation-check.md
@@ -63,7 +63,7 @@ The following information indicates successful login:
 ```sql
 Welcome to the MySQL monitor.  Commands end with ; or \g.
 Your MySQL connection id is 3
-Server version: 8.0.11-TiDB-v8.3.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
+Server version: 8.0.11-TiDB-v8.4.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
 Copyright (c) 2000, 2023, Oracle and/or its affiliates. All rights reserved.
 Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
 Other names may be trademarks of their respective
 
diff --git a/production-deployment-using-tiup.md b/production-deployment-using-tiup.md
index 20176ea7371ca..90977faf3a34a 100644
--- a/production-deployment-using-tiup.md
+++ b/production-deployment-using-tiup.md
@@ -95,7 +95,7 @@ https://download.pingcap.org/tidb-community-toolkit-{version}-linux-{arch}.tar.g
 
 > **Tip:**
 >
-> `{version}` in the link indicates the version number of TiDB and `{arch}` indicates the architecture of the system, which can be `amd64` or `arm64`. For example, the download link for `v8.3.0` in the `amd64` architecture is `https://download.pingcap.org/tidb-community-toolkit-v8.3.0-linux-amd64.tar.gz`.
+> `{version}` in the link indicates the version number of TiDB and `{arch}` indicates the architecture of the system, which can be `amd64` or `arm64`. For example, the download link for `v8.4.0` in the `amd64` architecture is `https://download.pingcap.org/tidb-community-toolkit-v8.4.0-linux-amd64.tar.gz`.
 
 **Method 2**: Manually pack an offline component package using `tiup mirror clone`. The detailed steps are as follows:
 
@@ -345,13 +345,13 @@ Before you run the `deploy` command, use the `check` and `check --apply` command
     {{< copyable "shell-regular" >}}
 
     ```shell
-    tiup cluster deploy tidb-test v8.3.0 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
+    tiup cluster deploy tidb-test v8.4.0 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
     ```
 
 In the `tiup cluster deploy` command above:
 
 - `tidb-test` is the name of the TiDB cluster to be deployed.
-- `v8.3.0` is the version of the TiDB cluster to be deployed. You can see the latest supported versions by running `tiup list tidb`.
+- `v8.4.0` is the version of the TiDB cluster to be deployed. You can see the latest supported versions by running `tiup list tidb`.
 - `topology.yaml` is the initialization configuration file.
 - `--user root` indicates logging into the target machine as the `root` user to complete the cluster deployment. The `root` user is expected to have `ssh` and `sudo` privileges to the target machine. Alternatively, you can use other users with `ssh` and `sudo` privileges to complete the deployment.
 - `[-i]` and `[-p]` are optional. If you have configured login to the target machine without password, these parameters are not required. If not, choose one of the two parameters. `[-i]` is the private key of the root user (or other users specified by `--user`) that has access to the target machine. `[-p]` is used to input the user password interactively.
diff --git a/quick-start-with-tidb.md b/quick-start-with-tidb.md
index 14d17ce0183ba..8c015c76f99f6 100644
--- a/quick-start-with-tidb.md
+++ b/quick-start-with-tidb.md
@@ -81,10 +81,10 @@ As a distributed system, a basic TiDB test cluster usually consists of 2 TiDB in
     {{< copyable "shell-regular" >}}
 
     ```shell
-    tiup playground v8.3.0 --db 2 --pd 3 --kv 3
+    tiup playground v8.4.0 --db 2 --pd 3 --kv 3
     ```
 
-    The command downloads a version cluster to the local machine and starts it, such as v8.3.0. To view the latest version, run `tiup list tidb`.
+    The command downloads a version cluster to the local machine and starts it, such as v8.4.0. To view the latest version, run `tiup list tidb`.
 
     This command returns the access methods of the cluster:
 
@@ -202,10 +202,10 @@ As a distributed system, a basic TiDB test cluster usually consists of 2 TiDB in
     {{< copyable "shell-regular" >}}
 
     ```shell
-    tiup playground v8.3.0 --db 2 --pd 3 --kv 3
+    tiup playground v8.4.0 --db 2 --pd 3 --kv 3
     ```
 
-    The command downloads a version cluster to the local machine and starts it, such as v8.3.0. To view the latest version, run `tiup list tidb`.
+    The command downloads a version cluster to the local machine and starts it, such as v8.4.0. To view the latest version, run `tiup list tidb`.
 
     This command returns the access methods of the cluster:
 
@@ -437,7 +437,7 @@ Other requirements for the target machine include:
     ```
 
     - `<cluster-name>`: Set the cluster name
-    - `<version>`: Set the TiDB cluster version, such as `v8.3.0`. You can see all the supported TiDB versions by running the `tiup list tidb` command
+    - `<version>`: Set the TiDB cluster version, such as `v8.4.0`. You can see all the supported TiDB versions by running the `tiup list tidb` command
     - `-p`: Specify the password used to connect to the target machine.
 
     > **Note:**
diff --git a/scale-microservices-using-tiup.md b/scale-microservices-using-tiup.md
index e11c71d455456..f7ab4765f0b5a 100644
--- a/scale-microservices-using-tiup.md
+++ b/scale-microservices-using-tiup.md
@@ -129,7 +129,7 @@ Starting /root/.tiup/components/cluster/v1.16/cluster display
 
 TiDB Cluster: <cluster-name>
-TiDB Version: v8.3.0
+TiDB Version: v8.4.0
 ID  Role  Host  Ports  Status  Data Dir  Deploy Dir
 
diff --git a/scale-tidb-using-tiup.md b/scale-tidb-using-tiup.md
index a3f4d185b623b..464e14dfe2583 100644
--- a/scale-tidb-using-tiup.md
+++ b/scale-tidb-using-tiup.md
@@ -298,7 +298,7 @@ This section exemplifies how to remove a TiKV node from the `10.0.1.5` host.
    ```
    Starting /root/.tiup/components/cluster/v1.12.3/cluster display <cluster-name>
    TiDB Cluster: <cluster-name>
-   TiDB Version: v8.3.0
+   TiDB Version: v8.4.0
    ID             Role  Host      Ports  Status  Data Dir       Deploy Dir
    --             ----  ----      -----  ------  --------       ----------
    10.0.1.3:8300  cdc   10.0.1.3  8300   Up      data/cdc-8300  deploy/cdc-8300
diff --git a/system-variables.md b/system-variables.md
index 43cc6de662636..fd3790f95ad7b 100644
--- a/system-variables.md
+++ b/system-variables.md
@@ -6458,7 +6458,7 @@ Internally, the TiDB parser transforms the `SET TRANSACTION ISOLATION LEVEL [REA
 - Scope: NONE
 - Applies to hint [SET_VAR](/optimizer-hints.md#set_varvar_namevar_value): No
 - Default value: `8.0.11-TiDB-`(tidb version)
-- This variable returns the MySQL version, followed by the TiDB version. For example '8.0.11-TiDB-v8.3.0'.
+- This variable returns the MySQL version, followed by the TiDB version. For example '8.0.11-TiDB-v8.4.0'.
 
 ### version_comment
 
diff --git a/ticdc/deploy-ticdc.md b/ticdc/deploy-ticdc.md
index ed46e9bbfc38f..254b5c070be12 100644
--- a/ticdc/deploy-ticdc.md
+++ b/ticdc/deploy-ticdc.md
@@ -95,7 +95,7 @@ tiup cluster upgrade <cluster-name> <version> --transfer-timeout 600
 
 > **Note:**
 >
-> In the preceding command, you need to replace `<cluster-name>` and `<version>` with the actual cluster name and cluster version. For example, the version can be v8.3.0.
+> In the preceding command, you need to replace `<cluster-name>` and `<version>` with the actual cluster name and cluster version. For example, the version can be v8.4.0.
 
 ### Upgrade cautions
 
@@ -150,7 +150,7 @@ See [Enable TLS Between TiDB Components](/enable-tls-between-components.md).
 
 ## View TiCDC status using the command-line tool
 
-Run the following command to view the TiCDC cluster status. Note that you need to replace `v<CLUSTER_VERSION>` with the TiCDC cluster version, such as `v8.3.0`:
+Run the following command to view the TiCDC cluster status. Note that you need to replace `v<CLUSTER_VERSION>` with the TiCDC cluster version, such as `v8.4.0`:
 
 ```shell
 tiup cdc:v<CLUSTER_VERSION> cli capture list --server=http://10.0.10.25:8300
 ```
diff --git a/ticdc/ticdc-changefeed-config.md b/ticdc/ticdc-changefeed-config.md
index 7859499cbe7e2..dfeb01eef6bb5 100644
--- a/ticdc/ticdc-changefeed-config.md
+++ b/ticdc/ticdc-changefeed-config.md
@@ -16,7 +16,7 @@ cdc cli changefeed create --server=http://10.0.10.25:8300 --sink-uri="mysql://ro
 
 ```shell
 Create changefeed successfully!
 ID: simple-replication-task
-Info: {"upstream_id":7178706266519722477,"namespace":"default","id":"simple-replication-task","sink_uri":"mysql://root:xxxxx@127.0.0.1:4000/?time-zone=","create_time":"2024-08-22T15:05:46.679218+08:00","start_ts":438156275634929669,"engine":"unified","config":{"case_sensitive":false,"enable_old_value":true,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":true,"bdr_mode":false,"sync_point_interval":30000000000,"sync_point_retention":3600000000000,"filter":{"rules":["test.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false},"column_selectors":null,"transaction_atomicity":"none","encoder_concurrency":16,"terminator":"\r\n","date_separator":"none","enable_partition_separator":false},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":""}},"state":"normal","creator_version":"v8.3.0"}
+Info: {"upstream_id":7178706266519722477,"namespace":"default","id":"simple-replication-task","sink_uri":"mysql://root:xxxxx@127.0.0.1:4000/?time-zone=","create_time":"2024-10-30T15:05:46.679218+08:00","start_ts":438156275634929669,"engine":"unified","config":{"case_sensitive":false,"enable_old_value":true,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":true,"bdr_mode":false,"sync_point_interval":30000000000,"sync_point_retention":3600000000000,"filter":{"rules":["test.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false},"column_selectors":null,"transaction_atomicity":"none","encoder_concurrency":16,"terminator":"\r\n","date_separator":"none","enable_partition_separator":false},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":""}},"state":"normal","creator_version":"v8.4.0"}
 ```
 
 - `--changefeed-id`: The ID of the replication task. The format must match the `^[a-zA-Z0-9]+(\-[a-zA-Z0-9]+)*$` regular expression. If this ID is not specified, TiCDC automatically generates a UUID (the version 4 format) as the ID.
diff --git a/ticdc/ticdc-changefeed-overview.md b/ticdc/ticdc-changefeed-overview.md
index 5164f0e3c70b6..5bc42408548e4 100644
--- a/ticdc/ticdc-changefeed-overview.md
+++ b/ticdc/ticdc-changefeed-overview.md
@@ -44,4 +44,4 @@ You can manage a TiCDC cluster and its replication tasks using the command-line
 
 You can also use the HTTP interface (the TiCDC OpenAPI feature) to manage a TiCDC cluster and its replication tasks. For details, see [TiCDC OpenAPI](/ticdc/ticdc-open-api.md).
 
-If your TiCDC is deployed using TiUP, you can start `cdc cli` by running the `tiup cdc:v<CLUSTER_VERSION> cli` command. Replace `v<CLUSTER_VERSION>` with the TiCDC cluster version, such as `v8.3.0`. You can also run `cdc cli` directly.
+If your TiCDC is deployed using TiUP, you can start `cdc cli` by running the `tiup cdc:v<CLUSTER_VERSION> cli` command. Replace `v<CLUSTER_VERSION>` with the TiCDC cluster version, such as `v8.4.0`. 
You can also run `cdc cli` directly. +If your TiCDC is deployed using TiUP, you can start `cdc cli` by running the `tiup cdc:v cli` command. Replace `v` with the TiCDC cluster version, such as `v8.4.0`. You can also run `cdc cli` directly. diff --git a/ticdc/ticdc-manage-changefeed.md b/ticdc/ticdc-manage-changefeed.md index 560ca5b5457da..e6a369b854839 100644 --- a/ticdc/ticdc-manage-changefeed.md +++ b/ticdc/ticdc-manage-changefeed.md @@ -19,7 +19,7 @@ cdc cli changefeed create --server=http://10.0.10.25:8300 --sink-uri="mysql://ro ```shell Create changefeed successfully! ID: simple-replication-task -Info: {"upstream_id":7178706266519722477,"namespace":"default","id":"simple-replication-task","sink_uri":"mysql://root:xxxxx@127.0.0.1:4000/?time-zone=","create_time":"2024-08-22T15:05:46.679218+08:00","start_ts":438156275634929669,"engine":"unified","config":{"case_sensitive":false,"enable_old_value":true,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":true,"bdr_mode":false,"sync_point_interval":30000000000,"sync_point_retention":3600000000000,"filter":{"rules":["test.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false},"column_selectors":null,"transaction_atomicity":"none","encoder_concurrency":16,"terminator":"\r\n","date_separator":"none","enable_partition_separator":false},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":""}},"state":"normal","creator_version":"v8.3.0"} +Info: {"upstream_id":7178706266519722477,"namespace":"default","id":"simple-replication-task","sink_uri":"mysql://root:xxxxx@127.0.0.1:4000/?time-zone=","create_time":"2024-10-30T15:05:46.679218+08:00","start_ts":438156275634929669,"engine":"unified","config":{"case_sensitive":false,"enable_old_value":true,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":true,"bdr_mode":false,"sync_point_interval":30000000000,"sync_point_retention":3600000000000,"filter":{"rules":["test.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false},"column_selectors":null,"transaction_atomicity":"none","encoder_concurrency":16,"terminator":"\r\n","date_separator":"none","enable_partition_separator":false},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":""}},"state":"normal","creator_version":"v8.4.0"} ``` ## Query the replication task list diff --git a/ticdc/ticdc-open-api-v2.md b/ticdc/ticdc-open-api-v2.md index b6a1b24597d2f..698635d2f6c35 100644 --- a/ticdc/ticdc-open-api-v2.md +++ b/ticdc/ticdc-open-api-v2.md @@ -92,7 +92,7 @@ curl -X GET http://127.0.0.1:8300/api/v2/status ```json { - "version": "v8.3.0", + "version": "v8.4.0", "git_hash": "10413bded1bdb2850aa6d7b94eb375102e9c44dc", "id": "d2912e63-3349-447c-90ba-72a4e04b5e9e", "pid": 1447, diff --git a/ticdc/ticdc-sink-to-cloud-storage.md b/ticdc/ticdc-sink-to-cloud-storage.md index 627e47646f752..514470331bcd1 100644 --- a/ticdc/ticdc-sink-to-cloud-storage.md +++ b/ticdc/ticdc-sink-to-cloud-storage.md @@ -24,7 +24,7 @@ cdc cli changefeed create \ The output is as follows: ```shell -Info: 
{"upstream_id":7171388873935111376,"namespace":"default","id":"simple-replication-task","sink_uri":"s3://logbucket/storage_test?protocol=canal-json","create_time":"2024-08-22T18:52:05.566016967+08:00","start_ts":437706850431664129,"engine":"unified","config":{"case_sensitive":false,"enable_old_value":true,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false},"column_selectors":null,"transaction_atomicity":"none","encoder_concurrency":16,"terminator":"\r\n","date_separator":"none","enable_partition_separator":false},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":""}},"state":"normal","creator_version":"v8.3.0"} +Info: {"upstream_id":7171388873935111376,"namespace":"default","id":"simple-replication-task","sink_uri":"s3://logbucket/storage_test?protocol=canal-json","create_time":"2024-10-30T18:52:05.566016967+08:00","start_ts":437706850431664129,"engine":"unified","config":{"case_sensitive":false,"enable_old_value":true,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false},"column_selectors":null,"transaction_atomicity":"none","encoder_concurrency":16,"terminator":"\r\n","date_separator":"none","enable_partition_separator":false},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":""}},"state":"normal","creator_version":"v8.4.0"} ``` - `--server`: The address of any TiCDC server in the TiCDC cluster. diff --git a/ticdc/ticdc-sink-to-pulsar.md b/ticdc/ticdc-sink-to-pulsar.md index caee06c3825cd..e926aec8d0e51 100644 --- a/ticdc/ticdc-sink-to-pulsar.md +++ b/ticdc/ticdc-sink-to-pulsar.md @@ -23,7 +23,7 @@ cdc cli changefeed create \ Create changefeed successfully! 
ID: simple-replication-task -Info: {"upstream_id":7277814241002263370,"namespace":"default","id":"simple-replication-task","sink_uri":"pulsar://127.0.0.1:6650/consumer-test?protocol=canal-json","create_time":"2024-08-22T14:42:32.000904+08:00","start_ts":444203257406423044,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["pulsar_test.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"dispatchers":[{"matcher":["pulsar_test.*"],"partition":"","topic":"test_{schema}_{table}"}],"encoder_concurrency":16,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"pulsar_config":{"connection-timeout":30,"operation-timeout":30,"batching-max-messages":1000,"batching-max-publish-delay":10,"send-timeout":30},"advance_timeout":150},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"use_file_backend":false},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"}},"state":"normal","creator_version":"v8.3.0","resolved_ts":444203257406423044,"checkpoint_ts":444203257406423044,"checkpoint_time":"2024-08-22 14:42:31.410"} +Info: {"upstream_id":7277814241002263370,"namespace":"default","id":"simple-replication-task","sink_uri":"pulsar://127.0.0.1:6650/consumer-test?protocol=canal-json","create_time":"2024-10-30T14:42:32.000904+08:00","start_ts":444203257406423044,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["pulsar_test.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"dispatchers":[{"matcher":["pulsar_test.*"],"partition":"","topic":"test_{schema}_{table}"}],"encoder_concurrency":16,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"pulsar_config":{"connection-timeout":30,"operation-timeout":30,"batching-max-messages":1000,"batching-max-publish-delay":10,"send-timeout":30},"advance_timeout":150},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"use_file_backend":false},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"}},"state":"normal","creator_version":"v8.4.0","resolved_ts":444203257406423044,"checkpoint_ts":444203257406423044,"checkpoint_time":"2024-10-30 14:42:31.410"} ``` The meaning of each parameter is as follows: diff --git a/tidb-monitoring-api.md b/tidb-monitoring-api.md index 886738c464586..5c72d821c0cc7 100644 --- a/tidb-monitoring-api.md +++ b/tidb-monitoring-api.md @@ -28,7 +28,7 @@ The following example uses `http://${host}:${port}/status` to get the current st curl http://127.0.0.1:10080/status { connections: 
 0,  # The current number of clients connected to the TiDB server.
-    version: "8.0.11-TiDB-v8.3.0",  # The TiDB version number.
+    version: "8.0.11-TiDB-v8.4.0",  # The TiDB version number.
     git_hash: "778c3f4a5a716880bcd1d71b257c8165685f0d70"  # The Git Hash of the current TiDB code.
 }
 ```
diff --git a/tiflash/create-tiflash-replicas.md b/tiflash/create-tiflash-replicas.md
index a905bd1f81074..14a5cdb5f554e 100644
--- a/tiflash/create-tiflash-replicas.md
+++ b/tiflash/create-tiflash-replicas.md
@@ -147,10 +147,10 @@ Before TiFlash replicas are added, each TiKV instance performs a full table scan
    tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 60 add-peer
    ```
 
-   > In the preceding command, you need to replace `v<CLUSTER_VERSION>` with the actual cluster version, such as `v8.3.0` and `<PD_ADDRESS>:2379` with the address of any PD node. For example:
+   > In the preceding command, you need to replace `v<CLUSTER_VERSION>` with the actual cluster version, such as `v8.4.0` and `<PD_ADDRESS>:2379` with the address of any PD node. For example:
    >
    > ```shell
-   > tiup ctl:v8.3.0 pd -u http://192.168.1.4:2379 store limit all engine tiflash 60 add-peer
+   > tiup ctl:v8.4.0 pd -u http://192.168.1.4:2379 store limit all engine tiflash 60 add-peer
    > ```
 
 Within a few minutes, you will observe a significant increase in CPU and disk IO resource usage of the TiFlash nodes, and TiFlash should create replicas faster. At the same time, the TiKV nodes' CPU and disk IO resource usage increases as well.
diff --git a/tiup/tiup-cluster.md b/tiup/tiup-cluster.md
index bb9897ea8275a..2740e651bc18e 100644
--- a/tiup/tiup-cluster.md
+++ b/tiup/tiup-cluster.md
@@ -62,7 +62,7 @@ To deploy the cluster, run the `tiup cluster deploy` command. The usage of the c
 tiup cluster deploy <cluster-name> <version> <topology.yaml> [flags]
 ```
 
-This command requires you to provide the cluster name, the TiDB cluster version (such as `v8.3.0`), and a topology file of the cluster.
+This command requires you to provide the cluster name, the TiDB cluster version (such as `v8.4.0`), and a topology file of the cluster.
 
 To write a topology file, refer to [the example](https://github.com/pingcap/tiup/blob/master/embed/examples/cluster/topology.example.yaml). The following file is an example of the simplest topology:
 
@@ -122,12 +122,12 @@ tidb_servers:
 ...
 ```
 
-Save the file as `/tmp/topology.yaml`. If you want to use TiDB v8.3.0 and your cluster name is `prod-cluster`, run the following command:
+Save the file as `/tmp/topology.yaml`. 
If you want to use TiDB v8.4.0 and your cluster name is `prod-cluster`, run the following command: {{< copyable "shell-regular" >}} ```shell -tiup cluster deploy -p prod-cluster v8.3.0 /tmp/topology.yaml +tiup cluster deploy -p prod-cluster v8.4.0 /tmp/topology.yaml ``` During the execution, TiUP asks you to confirm your topology again and requires the root password of the target machine (the `-p` flag means inputting password): @@ -135,7 +135,7 @@ During the execution, TiUP asks you to confirm your topology again and requires ```bash Please confirm your topology: TiDB Cluster: prod-cluster -TiDB Version: v8.3.0 +TiDB Version: v8.4.0 Type Host Ports OS/Arch Directories ---- ---- ----- ------- ----------- pd 172.16.5.134 2379/2380 linux/x86_64 deploy/pd-2379,data/pd-2379 @@ -179,7 +179,7 @@ tiup cluster list Starting /root/.tiup/components/cluster/v1.12.3/cluster list Name User Version Path PrivateKey ---- ---- ------- ---- ---------- -prod-cluster tidb v8.3.0 /root/.tiup/storage/cluster/clusters/prod-cluster /root/.tiup/storage/cluster/clusters/prod-cluster/ssh/id_rsa +prod-cluster tidb v8.4.0 /root/.tiup/storage/cluster/clusters/prod-cluster /root/.tiup/storage/cluster/clusters/prod-cluster/ssh/id_rsa ``` ## Start the cluster @@ -209,7 +209,7 @@ tiup cluster display prod-cluster ``` Starting /root/.tiup/components/cluster/v1.12.3/cluster display prod-cluster TiDB Cluster: prod-cluster -TiDB Version: v8.3.0 +TiDB Version: v8.4.0 ID Role Host Ports OS/Arch Status Data Dir Deploy Dir -- ---- ---- ----- ------- ------ -------- ---------- 172.16.5.134:3000 grafana 172.16.5.134 3000 linux/x86_64 Up - deploy/grafana-3000 @@ -284,7 +284,7 @@ tiup cluster display prod-cluster ``` Starting /root/.tiup/components/cluster/v1.12.3/cluster display prod-cluster TiDB Cluster: prod-cluster -TiDB Version: v8.3.0 +TiDB Version: v8.4.0 ID Role Host Ports OS/Arch Status Data Dir Deploy Dir -- ---- ---- ----- ------- ------ -------- ---------- 172.16.5.134:3000 grafana 172.16.5.134 3000 linux/x86_64 Up - deploy/grafana-3000 @@ -396,12 +396,12 @@ Global Flags: -y, --yes Skip all confirmations and assumes 'yes' ``` -For example, the following command upgrades the cluster to v8.3.0: +For example, the following command upgrades the cluster to v8.4.0: {{< copyable "shell-regular" >}} ```bash -tiup cluster upgrade tidb-test v8.3.0 +tiup cluster upgrade tidb-test v8.4.0 ``` ## Update configuration @@ -586,11 +586,11 @@ tiup cluster audit Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.12.3/cluster audit ID Time Command -- ---- ------- -4BLhr0 2024-08-22T23:55:09+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.3.0 /tmp/topology.yaml -4BKWjF 2024-08-22T23:36:57+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.3.0 /tmp/topology.yaml -4BKVwH 2024-08-22T23:02:08+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.3.0 /tmp/topology.yaml -4BKKH1 2024-08-22T16:39:04+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster destroy test -4BKKDx 2024-08-22T16:36:57+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.3.0 /tmp/topology.yaml +4BLhr0 2024-10-30T23:55:09+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.4.0 /tmp/topology.yaml +4BKWjF 2024-10-30T23:36:57+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.4.0 /tmp/topology.yaml +4BKVwH 2024-10-30T23:02:08+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.4.0 /tmp/topology.yaml +4BKKH1 
2024-10-30T16:39:04+08:00  /home/tidb/.tiup/components/cluster/v1.12.3/cluster destroy test
+4BKKDx  2024-10-30T16:36:57+08:00  /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.4.0 /tmp/topology.yaml
 ```
 
 The first column is `audit-id`. To view the execution log of a certain command, pass the `audit-id` of a command as the flag as follows:
@@ -705,7 +705,7 @@ All operations above performed on the cluster machine use the SSH client embedde
 
 Then you can use the `--ssh=system` command-line flag to enable the system-native command-line tool:
 
-- Deploy a cluster: `tiup cluster deploy <cluster-name> <version> <topology.yaml> --ssh=system`. Fill in the name of your cluster for `<cluster-name>`, the TiDB version to be deployed (such as `v8.3.0`) for `<version>`, and the topology file for `<topology.yaml>`.
+- Deploy a cluster: `tiup cluster deploy <cluster-name> <version> <topology.yaml> --ssh=system`. Fill in the name of your cluster for `<cluster-name>`, the TiDB version to be deployed (such as `v8.4.0`) for `<version>`, and the topology file for `<topology.yaml>`.
 - Start a cluster: `tiup cluster start <cluster-name> --ssh=system`
 - Upgrade a cluster: `tiup cluster upgrade <cluster-name> <version> ... --ssh=system`
 
diff --git a/tiup/tiup-component-cluster-deploy.md b/tiup/tiup-component-cluster-deploy.md
index 5ec0b0e2e2a0a..4dbde25cd4bc3 100644
--- a/tiup/tiup-component-cluster-deploy.md
+++ b/tiup/tiup-component-cluster-deploy.md
@@ -14,7 +14,7 @@ tiup cluster deploy <cluster-name> <version> <topology.yaml> [flags]
 ```
 
 - `<cluster-name>`: the name of the new cluster, which cannot be the same as the existing cluster names.
-- `<version>`: the version number of the TiDB cluster to deploy, such as `v8.3.0`.
+- `<version>`: the version number of the TiDB cluster to deploy, such as `v8.4.0`.
 - `<topology.yaml>`: the prepared [topology file](/tiup/tiup-cluster-topology-reference.md).
 
 ## Options
diff --git a/tiup/tiup-component-cluster-patch.md b/tiup/tiup-component-cluster-patch.md
index 7c5b079608987..4b7e4f1755ee9 100644
--- a/tiup/tiup-component-cluster-patch.md
+++ b/tiup/tiup-component-cluster-patch.md
@@ -29,7 +29,7 @@ Before running the `tiup cluster patch` command, you need to pack the binary pac
 1. Determine the following variables:
 
     - `${component}`: the name of the component to be replaced (such as `tidb`, `tikv`, or `pd`).
-    - `${version}`: the version of the component (such as `v8.3.0` or `v7.5.3`).
+    - `${version}`: the version of the component (such as `v8.4.0` or `v7.5.4`).
     - `${os}`: the operating system (`linux`).
     - `${arch}`: the platform on which the component runs (`amd64`, `arm64`).
 
diff --git a/tiup/tiup-component-cluster-upgrade.md b/tiup/tiup-component-cluster-upgrade.md
index ffcab8d27ebff..90ff11c0a30d4 100644
--- a/tiup/tiup-component-cluster-upgrade.md
+++ b/tiup/tiup-component-cluster-upgrade.md
@@ -14,7 +14,7 @@ tiup cluster upgrade <cluster-name> <version> [flags]
 ```
 
 - `<cluster-name>`: the cluster name to operate on. If you forget the cluster name, you can check it with the [cluster list](/tiup/tiup-component-cluster-list.md) command.
-- `<version>`: the target version to upgrade to, such as `v8.3.0`. Currently, it is only allowed to upgrade to a version higher than the current cluster, that is, no downgrade is allowed. It is also not allowed to upgrade to the nightly version.
+- `<version>`: the target version to upgrade to, such as `v8.4.0`. Currently, it is only allowed to upgrade to a version higher than the current cluster, that is, no downgrade is allowed. It is also not allowed to upgrade to the nightly version.
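Putting the two arguments together, an invocation consistent with this patch would be the following (the cluster name `tidb-test` is only an illustration):

```shell
# Upgrade the cluster named tidb-test to v8.4.0; downgrades and nightly targets are rejected.
tiup cluster upgrade tidb-test v8.4.0
```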
 ## Options
 
diff --git a/tiup/tiup-component-dm-upgrade.md b/tiup/tiup-component-dm-upgrade.md
index 44b695eaad13d..b15df2b2bd2e6 100644
--- a/tiup/tiup-component-dm-upgrade.md
+++ b/tiup/tiup-component-dm-upgrade.md
@@ -14,7 +14,7 @@ tiup dm upgrade <cluster-name> <version> [flags]
 ```
 
 - `<cluster-name>` is the name of the cluster to be operated on. If you forget the cluster name, you can check it using the [`tiup dm list`](/tiup/tiup-component-dm-list.md) command.
-- `<version>` is the target version to be upgraded to, such as `v8.3.0`. Currently, only upgrading to a later version is allowed, and upgrading to an earlier version is not allowed, which means the downgrade is not allowed. Upgrading to a nightly version is not allowed either.
+- `<version>` is the target version to be upgraded to, such as `v8.4.0`. Currently, only upgrading to a later version is allowed, and upgrading to an earlier version is not allowed, which means the downgrade is not allowed. Upgrading to a nightly version is not allowed either.
 
 ## Options
 
diff --git a/tiup/tiup-component-management.md b/tiup/tiup-component-management.md
index 0cf5630d46b48..32b86d5289537 100644
--- a/tiup/tiup-component-management.md
+++ b/tiup/tiup-component-management.md
@@ -72,12 +72,12 @@ Example 2: Use TiUP to install the nightly version of TiDB.
 tiup install tidb:nightly
 ```
 
-Example 3: Use TiUP to install TiKV v8.3.0.
+Example 3: Use TiUP to install TiKV v8.4.0.
 
 {{< copyable "shell-regular" >}}
 
 ```shell
-tiup install tikv:v8.3.0
+tiup install tikv:v8.4.0
 ```
 
 ## Upgrade components
@@ -130,12 +130,12 @@ Before the component is started, TiUP creates a directory for it, and then puts
 
 If you want to start the same component multiple times and reuse the previous working directory, you can use `--tag` to specify the same name when the component is started. After the tag is specified, the working directory will *not be automatically deleted* when the instance is terminated, which makes it convenient to reuse the working directory.
 
-Example 1: Operate TiDB v8.3.0.
+Example 1: Operate TiDB v8.4.0.
 
 {{< copyable "shell-regular" >}}
 
 ```shell
-tiup tidb:v8.3.0
+tiup tidb:v8.4.0
 ```
 
 Example 2: Specify the tag with which TiKV operates.
@@ -221,12 +221,12 @@ The following flags are supported in this command:
 
 - If the version is ignored, adding `--all` means to uninstall all versions of this component.
 - If the version and the component are both ignored, adding `--all` means to uninstall all components of all versions.
 
-Example 1: Uninstall TiDB v8.3.0.
+Example 1: Uninstall TiDB v8.4.0.
 
 {{< copyable "shell-regular" >}}
 
 ```shell
-tiup uninstall tidb:v8.3.0
+tiup uninstall tidb:v8.4.0
 ```
 
 Example 2: Uninstall TiKV of all versions.
diff --git a/tiup/tiup-mirror.md b/tiup/tiup-mirror.md
index a291d9d3e01d1..4cd28c0ed42f9 100644
--- a/tiup/tiup-mirror.md
+++ b/tiup/tiup-mirror.md
@@ -87,9 +87,9 @@ The `tiup mirror clone` command provides many optional flags (might provide more
 
 If you want to clone only one version (not all versions) of a component, use `--<component>=<version>` to specify this version. For example:
 
   - Execute the `tiup mirror clone --tidb v8.4.0` command to clone the v8.4.0 version of the TiDB component. 
+ - Run the `tiup mirror clone --tidb v8.4.0 --tikv all` command to clone the v8.4.0 version of the TiDB component and all versions of the TiKV component. + - Run the `tiup mirror clone v8.4.0` command to clone the v8.4.0 version of all components in a cluster. After cloning, signing keys are set up automatically. diff --git a/tiup/tiup-playground.md b/tiup/tiup-playground.md index d3d7c6a9fe7d1..09d9c4ca2c69e 100644 --- a/tiup/tiup-playground.md +++ b/tiup/tiup-playground.md @@ -22,7 +22,7 @@ This command actually performs the following operations: - Because this command does not specify the version of the playground component, TiUP first checks the latest version of the installed playground component. Assume that the latest version is v1.12.3, then this command works the same as `tiup playground:v1.12.3`. - If you have not used TiUP playground to install the TiDB, TiKV, and PD components, the playground component installs the latest stable version of these components, and then start these instances. -- Because this command does not specify the version of the TiDB, PD, and TiKV component, TiUP playground uses the latest version of each component by default. Assume that the latest version is v8.3.0, then this command works the same as `tiup playground:v1.12.3 v8.3.0`. +- Because this command does not specify the version of the TiDB, PD, and TiKV component, TiUP playground uses the latest version of each component by default. Assume that the latest version is v8.4.0, then this command works the same as `tiup playground:v1.12.3 v8.4.0`. - Because this command does not specify the number of each component, TiUP playground, by default, starts a smallest cluster that consists of one TiDB instance, one TiKV instance, one PD instance, and one TiFlash instance. - After starting each TiDB component, TiUP playground reminds you that the cluster is successfully started and provides you some useful information, such as how to connect to the TiDB cluster through the MySQL client and how to access the [TiDB Dashboard](/dashboard/dashboard-intro.md). @@ -179,7 +179,7 @@ tiup playground scale-in --pid 86526 Starting from v8.2.0, [PD microservice mode](/pd-microservices.md) (experimental) can be deployed using TiUP. You can deploy the `tso` microservice and `scheduling` microservice for your cluster using TiUP Playground as follows: ```shell -tiup playground v8.3.0 --pd.mode ms --pd 3 --tso 2 --scheduling 2 +tiup playground v8.4.0 --pd.mode ms --pd 3 --tso 2 --scheduling 2 ``` - `--pd.mode`: setting it to `ms` means enabling the microservice mode for PD. diff --git a/upgrade-monitoring-services.md b/upgrade-monitoring-services.md index 5df8aafe07962..8610c917c7c4d 100644 --- a/upgrade-monitoring-services.md +++ b/upgrade-monitoring-services.md @@ -36,7 +36,7 @@ Download a new installation package from the [Prometheus download page](https:// > **Tip:** > - > `{version}` in the link indicates the version number of TiDB and `{arch}` indicates the architecture of the system, which can be `amd64` or `arm64`. For example, the download link for `v8.3.0` in the `amd64` architecture is `https://download.pingcap.org/tidb-community-toolkit-v8.3.0-linux-amd64.tar.gz`. + > `{version}` in the link indicates the version number of TiDB and `{arch}` indicates the architecture of the system, which can be `amd64` or `arm64`. For example, the download link for `v8.4.0` in the `amd64` architecture is `https://download.pingcap.org/tidb-community-toolkit-v8.4.0-linux-amd64.tar.gz`. 2. 
In the extracted files, locate `prometheus-v{version}-linux-amd64.tar.gz` and extract it. @@ -85,7 +85,7 @@ In the following upgrade steps, you need to download the Grafana installation pa > **Tip:** > - > `{version}` in the link indicates the version number of TiDB and `{arch}` indicates the architecture of the system, which can be `amd64` or `arm64`. For example, the download link for `v8.3.0` in the `amd64` architecture is `https://download.pingcap.org/tidb-community-toolkit-v8.3.0-linux-amd64.tar.gz`. + > `{version}` in the link indicates the version number of TiDB and `{arch}` indicates the architecture of the system, which can be `amd64` or `arm64`. For example, the download link for `v8.4.0` in the `amd64` architecture is `https://download.pingcap.org/tidb-community-toolkit-v8.4.0-linux-amd64.tar.gz`. 2. In the extracted files, locate `grafana-v{version}-linux-amd64.tar.gz` and extract it. diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index cb47c50a7e2b4..f887db856b792 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -8,11 +8,11 @@ aliases: ['/docs/dev/upgrade-tidb-using-tiup/','/docs/dev/how-to/upgrade/using-t This document is targeted for the following upgrade paths: -- Upgrade from TiDB 4.0 versions to TiDB 8.3. -- Upgrade from TiDB 5.0-5.4 versions to TiDB 8.3. -- Upgrade from TiDB 6.0-6.6 to TiDB 8.3. -- Upgrade from TiDB 7.0-7.6 to TiDB 8.3. -- Upgrade from TiDB 8.0-8.2 to TiDB 8.3. +- Upgrade from TiDB 4.0 versions to TiDB 8.4. +- Upgrade from TiDB 5.0-5.4 versions to TiDB 8.4. +- Upgrade from TiDB 6.0-6.6 to TiDB 8.4. +- Upgrade from TiDB 7.0-7.6 to TiDB 8.4. +- Upgrade from TiDB 8.0-8.3 to TiDB 8.4. > **Warning:** > @@ -25,7 +25,7 @@ This document is targeted for the following upgrade paths: > **Note:** > -> - If your cluster to be upgraded is v3.1 or an earlier version (v3.0 or v2.1), the direct upgrade to v8.3.0 is not supported. You need to upgrade your cluster first to v4.0 and then to v8.3.0. +> - If your cluster to be upgraded is v3.1 or an earlier version (v3.0 or v2.1), the direct upgrade to v8.4.0 is not supported. You need to upgrade your cluster first to v4.0 and then to v8.4.0. > - If your cluster to be upgraded is earlier than v6.2, the upgrade might get stuck when you upgrade the cluster to v6.2 or later versions in some scenarios. You can refer to [How to fix the issue](#how-to-fix-the-issue-that-the-upgrade-gets-stuck-when-upgrading-to-v620-or-later-versions). > - TiDB nodes use the value of the [`server-version`](/tidb-configuration-file.md#server-version) configuration item to verify the current TiDB version. Therefore, to avoid unexpected behaviors, before upgrading the TiDB cluster, you need to set the value of `server-version` to empty or the real version of the current TiDB cluster. > - Setting the [`performance.force-init-stats`](/tidb-configuration-file.md#force-init-stats-new-in-v657-and-v710) configuration item to `ON` prolongs the TiDB startup time, which might cause startup timeouts and upgrade failures. To avoid this issue, it is recommended to set a longer waiting timeout for TiUP. @@ -57,12 +57,12 @@ This document is targeted for the following upgrade paths: ## Upgrade caveat - TiDB currently does not support version downgrade or rolling back to an earlier version after the upgrade. 
-- For the v4.0 cluster managed using TiDB Ansible, you need to import the cluster to TiUP (`tiup cluster`) for new management according to [Upgrade TiDB Using TiUP (v4.0)](https://docs.pingcap.com/tidb/v4.0/upgrade-tidb-using-tiup#import-tidb-ansible-and-the-inventoryini-configuration-to-tiup). Then you can upgrade the cluster to v8.3.0 according to this document. -- To update versions earlier than v3.0 to v8.3.0: +- For the v4.0 cluster managed using TiDB Ansible, you need to import the cluster to TiUP (`tiup cluster`) for new management according to [Upgrade TiDB Using TiUP (v4.0)](https://docs.pingcap.com/tidb/v4.0/upgrade-tidb-using-tiup#import-tidb-ansible-and-the-inventoryini-configuration-to-tiup). Then you can upgrade the cluster to v8.4.0 according to this document. +- To update versions earlier than v3.0 to v8.4.0: 1. Update this version to 3.0 using [TiDB Ansible](https://docs.pingcap.com/tidb/v3.0/upgrade-tidb-using-ansible). 2. Use TiUP (`tiup cluster`) to import the TiDB Ansible configuration. 3. Update the 3.0 version to 4.0 according to [Upgrade TiDB Using TiUP (v4.0)](https://docs.pingcap.com/tidb/v4.0/upgrade-tidb-using-tiup#import-tidb-ansible-and-the-inventoryini-configuration-to-tiup). - 4. Upgrade the cluster to v8.3.0 according to this document. + 4. Upgrade the cluster to v8.4.0 according to this document. - Support upgrading the versions of TiCDC, TiFlash, and other components. - When upgrading TiFlash from versions earlier than v6.3.0 to v6.3.0 and later versions, note that the CPU must support the AVX2 instruction set under the Linux AMD64 architecture and the ARMv8 instruction set architecture under the Linux ARM64 architecture. For details, see the description in [v6.3.0 Release Notes](/releases/release-6.3.0.md#others). - For detailed compatibility changes of different versions, see the [Release Notes](/releases/release-notes.md) of each version. Modify your cluster configuration according to the "Compatibility Changes" section of the corresponding release notes. @@ -74,7 +74,7 @@ This section introduces the preparation works needed before upgrading your TiDB ### Step 1: Review compatibility changes -Review [the compatibility changes](/releases/release-8.3.0.md#compatibility-changes) in TiDB v8.3.0 release notes. If any changes affect your upgrade, take actions accordingly. +Review [the compatibility changes](/releases/release-8.4.0.md#compatibility-changes) in TiDB v8.4.0 release notes. If any changes affect your upgrade, take actions accordingly. ### Step 2: Upgrade TiUP or TiUP offline mirror @@ -149,7 +149,7 @@ Now, the offline mirror has been upgraded successfully. If an error occurs durin > Skip this step if one of the following situations applies: > > + You have not modified the configuration parameters of the original cluster. Or you have modified the configuration parameters using `tiup cluster` but no more modification is needed. -> + After the upgrade, you want to use v8.3.0's default parameter values for the unmodified configuration items. +> + After the upgrade, you want to use v8.4.0's default parameter values for the unmodified configuration items. 1. Enter the `vi` editing mode to edit the topology file: @@ -165,7 +165,7 @@ Now, the offline mirror has been upgraded successfully. If an error occurs durin > **Note:** > -> Before you upgrade the cluster to v6.6.0, make sure that the parameters you have modified in v4.0 are compatible in v8.3.0. For details, see [TiKV Configuration File](/tikv-configuration-file.md). 
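As a concrete sketch of this editing step, `tiup cluster edit-config` takes the cluster name as its argument (`tidb-test` below is an assumed example):

```shell
# Opens the cluster topology in vi; the parameters saved here take effect
# when the cluster is upgraded or reloaded.
tiup cluster edit-config tidb-test
```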
+> Before you upgrade the cluster to v6.6.0, make sure that the parameters you have modified in v4.0 are compatible in v8.4.0. For details, see [TiKV Configuration File](/tikv-configuration-file.md). ### Step 4: Check the DDL and backup status of the cluster @@ -213,12 +213,12 @@ If your application has a maintenance window for the database to be stopped for tiup cluster upgrade ``` -For example, if you want to upgrade the cluster to v8.3.0: +For example, if you want to upgrade the cluster to v8.4.0: {{< copyable "shell-regular" >}} ```shell -tiup cluster upgrade v8.3.0 +tiup cluster upgrade v8.4.0 ``` > **Note:** @@ -264,7 +264,7 @@ tiup cluster upgrade -h | grep "version" tiup cluster stop ``` -2. Use the `upgrade` command with the `--offline` option to perform the offline upgrade. Fill in the name of your cluster for `` and the version to upgrade to for ``, such as `v8.3.0`. +2. Use the `upgrade` command with the `--offline` option to perform the offline upgrade. Fill in the name of your cluster for `` and the version to upgrade to for ``, such as `v8.4.0`. {{< copyable "shell-regular" >}} @@ -293,7 +293,7 @@ tiup cluster display ``` Cluster type: tidb Cluster name: -Cluster version: v8.3.0 +Cluster version: v8.4.0 ``` ## FAQ @@ -348,7 +348,7 @@ Starting from v6.2.0, TiDB enables the [concurrent DDL framework](/ddl-introduct ### The evict leader has waited too long during the upgrade. How to skip this step for a quick upgrade? -You can specify `--force`. Then the processes of transferring PD leader and evicting TiKV leader are skipped during the upgrade. The cluster is directly restarted to update the version, which has a great impact on the cluster that runs online. In the following command, `` is the version to upgrade to, such as `v8.3.0`. +You can specify `--force`. Then the processes of transferring PD leader and evicting TiKV leader are skipped during the upgrade. The cluster is directly restarted to update the version, which has a great impact on the cluster that runs online. In the following command, `` is the version to upgrade to, such as `v8.4.0`. {{< copyable "shell-regular" >}} @@ -363,5 +363,5 @@ You can upgrade the tool version by using TiUP to install the `ctl` component of {{< copyable "shell-regular" >}} ```shell -tiup install ctl:v8.3.0 +tiup install ctl:v8.4.0 ``` From 6aba6034552b5d6d146d782b733fae9ba333fd5e Mon Sep 17 00:00:00 2001 From: Aolin Date: Wed, 30 Oct 2024 16:04:08 +0800 Subject: [PATCH 2/8] update release date Signed-off-by: Aolin --- ticdc/ticdc-changefeed-config.md | 2 +- ticdc/ticdc-manage-changefeed.md | 2 +- ticdc/ticdc-sink-to-cloud-storage.md | 2 +- ticdc/ticdc-sink-to-pulsar.md | 2 +- tiup/tiup-cluster.md | 10 +++++----- 5 files changed, 9 insertions(+), 9 deletions(-) diff --git a/ticdc/ticdc-changefeed-config.md b/ticdc/ticdc-changefeed-config.md index dfeb01eef6bb5..530b86f3c4525 100644 --- a/ticdc/ticdc-changefeed-config.md +++ b/ticdc/ticdc-changefeed-config.md @@ -16,7 +16,7 @@ cdc cli changefeed create --server=http://10.0.10.25:8300 --sink-uri="mysql://ro ```shell Create changefeed successfully! 
ID: simple-replication-task -Info: {"upstream_id":7178706266519722477,"namespace":"default","id":"simple-replication-task","sink_uri":"mysql://root:xxxxx@127.0.0.1:4000/?time-zone=","create_time":"2024-10-30T15:05:46.679218+08:00","start_ts":438156275634929669,"engine":"unified","config":{"case_sensitive":false,"enable_old_value":true,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":true,"bdr_mode":false,"sync_point_interval":30000000000,"sync_point_retention":3600000000000,"filter":{"rules":["test.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false},"column_selectors":null,"transaction_atomicity":"none","encoder_concurrency":16,"terminator":"\r\n","date_separator":"none","enable_partition_separator":false},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":""}},"state":"normal","creator_version":"v8.4.0"} +Info: {"upstream_id":7178706266519722477,"namespace":"default","id":"simple-replication-task","sink_uri":"mysql://root:xxxxx@127.0.0.1:4000/?time-zone=","create_time":"2024-11-11T15:05:46.679218+08:00","start_ts":438156275634929669,"engine":"unified","config":{"case_sensitive":false,"enable_old_value":true,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":true,"bdr_mode":false,"sync_point_interval":30000000000,"sync_point_retention":3600000000000,"filter":{"rules":["test.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false},"column_selectors":null,"transaction_atomicity":"none","encoder_concurrency":16,"terminator":"\r\n","date_separator":"none","enable_partition_separator":false},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":""}},"state":"normal","creator_version":"v8.4.0"} ``` - `--changefeed-id`: The ID of the replication task. The format must match the `^[a-zA-Z0-9]+(\-[a-zA-Z0-9]+)*$` regular expression. If this ID is not specified, TiCDC automatically generates a UUID (the version 4 format) as the ID. diff --git a/ticdc/ticdc-manage-changefeed.md b/ticdc/ticdc-manage-changefeed.md index e6a369b854839..4668a5c4df497 100644 --- a/ticdc/ticdc-manage-changefeed.md +++ b/ticdc/ticdc-manage-changefeed.md @@ -19,7 +19,7 @@ cdc cli changefeed create --server=http://10.0.10.25:8300 --sink-uri="mysql://ro ```shell Create changefeed successfully! 
ID: simple-replication-task -Info: {"upstream_id":7178706266519722477,"namespace":"default","id":"simple-replication-task","sink_uri":"mysql://root:xxxxx@127.0.0.1:4000/?time-zone=","create_time":"2024-10-30T15:05:46.679218+08:00","start_ts":438156275634929669,"engine":"unified","config":{"case_sensitive":false,"enable_old_value":true,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":true,"bdr_mode":false,"sync_point_interval":30000000000,"sync_point_retention":3600000000000,"filter":{"rules":["test.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false},"column_selectors":null,"transaction_atomicity":"none","encoder_concurrency":16,"terminator":"\r\n","date_separator":"none","enable_partition_separator":false},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":""}},"state":"normal","creator_version":"v8.4.0"} +Info: {"upstream_id":7178706266519722477,"namespace":"default","id":"simple-replication-task","sink_uri":"mysql://root:xxxxx@127.0.0.1:4000/?time-zone=","create_time":"2024-11-11T15:05:46.679218+08:00","start_ts":438156275634929669,"engine":"unified","config":{"case_sensitive":false,"enable_old_value":true,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":true,"bdr_mode":false,"sync_point_interval":30000000000,"sync_point_retention":3600000000000,"filter":{"rules":["test.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false},"column_selectors":null,"transaction_atomicity":"none","encoder_concurrency":16,"terminator":"\r\n","date_separator":"none","enable_partition_separator":false},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":""}},"state":"normal","creator_version":"v8.4.0"} ``` ## Query the replication task list diff --git a/ticdc/ticdc-sink-to-cloud-storage.md b/ticdc/ticdc-sink-to-cloud-storage.md index 514470331bcd1..93101cfcbfbb4 100644 --- a/ticdc/ticdc-sink-to-cloud-storage.md +++ b/ticdc/ticdc-sink-to-cloud-storage.md @@ -24,7 +24,7 @@ cdc cli changefeed create \ The output is as follows: ```shell -Info: {"upstream_id":7171388873935111376,"namespace":"default","id":"simple-replication-task","sink_uri":"s3://logbucket/storage_test?protocol=canal-json","create_time":"2024-10-30T18:52:05.566016967+08:00","start_ts":437706850431664129,"engine":"unified","config":{"case_sensitive":false,"enable_old_value":true,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false},"column_selectors":null,"transaction_atomicity":"none","encoder_concurrency":16,"terminator":"\r\n","date_separator":"none","enable_partition_separator":false},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":""}},"state":"normal","creator_version":"v8.4.0"} +Info: 
{"upstream_id":7171388873935111376,"namespace":"default","id":"simple-replication-task","sink_uri":"s3://logbucket/storage_test?protocol=canal-json","create_time":"2024-11-11T18:52:05.566016967+08:00","start_ts":437706850431664129,"engine":"unified","config":{"case_sensitive":false,"enable_old_value":true,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false},"column_selectors":null,"transaction_atomicity":"none","encoder_concurrency":16,"terminator":"\r\n","date_separator":"none","enable_partition_separator":false},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":""}},"state":"normal","creator_version":"v8.4.0"} ``` - `--server`: The address of any TiCDC server in the TiCDC cluster. diff --git a/ticdc/ticdc-sink-to-pulsar.md b/ticdc/ticdc-sink-to-pulsar.md index e926aec8d0e51..7c5f4d3f4ce1d 100644 --- a/ticdc/ticdc-sink-to-pulsar.md +++ b/ticdc/ticdc-sink-to-pulsar.md @@ -23,7 +23,7 @@ cdc cli changefeed create \ Create changefeed successfully! ID: simple-replication-task -Info: {"upstream_id":7277814241002263370,"namespace":"default","id":"simple-replication-task","sink_uri":"pulsar://127.0.0.1:6650/consumer-test?protocol=canal-json","create_time":"2024-10-30T14:42:32.000904+08:00","start_ts":444203257406423044,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["pulsar_test.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"dispatchers":[{"matcher":["pulsar_test.*"],"partition":"","topic":"test_{schema}_{table}"}],"encoder_concurrency":16,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"pulsar_config":{"connection-timeout":30,"operation-timeout":30,"batching-max-messages":1000,"batching-max-publish-delay":10,"send-timeout":30},"advance_timeout":150},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"use_file_backend":false},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"}},"state":"normal","creator_version":"v8.4.0","resolved_ts":444203257406423044,"checkpoint_ts":444203257406423044,"checkpoint_time":"2024-10-30 14:42:31.410"} +Info: 
{"upstream_id":7277814241002263370,"namespace":"default","id":"simple-replication-task","sink_uri":"pulsar://127.0.0.1:6650/consumer-test?protocol=canal-json","create_time":"2024-11-11T14:42:32.000904+08:00","start_ts":444203257406423044,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["pulsar_test.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"dispatchers":[{"matcher":["pulsar_test.*"],"partition":"","topic":"test_{schema}_{table}"}],"encoder_concurrency":16,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"pulsar_config":{"connection-timeout":30,"operation-timeout":30,"batching-max-messages":1000,"batching-max-publish-delay":10,"send-timeout":30},"advance_timeout":150},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"use_file_backend":false},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"}},"state":"normal","creator_version":"v8.4.0","resolved_ts":444203257406423044,"checkpoint_ts":444203257406423044,"checkpoint_time":"2024-10-30 14:42:31.410"} ``` The meaning of each parameter is as follows: diff --git a/tiup/tiup-cluster.md b/tiup/tiup-cluster.md index 2740e651bc18e..f3babcb166edf 100644 --- a/tiup/tiup-cluster.md +++ b/tiup/tiup-cluster.md @@ -586,11 +586,11 @@ tiup cluster audit Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.12.3/cluster audit ID Time Command -- ---- ------- -4BLhr0 2024-10-30T23:55:09+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.4.0 /tmp/topology.yaml -4BKWjF 2024-10-30T23:36:57+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.4.0 /tmp/topology.yaml -4BKVwH 2024-10-30T23:02:08+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.4.0 /tmp/topology.yaml -4BKKH1 2024-10-30T16:39:04+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster destroy test -4BKKDx 2024-10-30T16:36:57+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.4.0 /tmp/topology.yaml +4BLhr0 2024-11-11T23:55:09+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.4.0 /tmp/topology.yaml +4BKWjF 2024-11-11T23:36:57+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.4.0 /tmp/topology.yaml +4BKVwH 2024-11-11T23:02:08+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.4.0 /tmp/topology.yaml +4BKKH1 2024-11-11T16:39:04+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster destroy test +4BKKDx 2024-11-11T16:36:57+08:00 /home/tidb/.tiup/components/cluster/v1.12.3/cluster deploy test v8.4.0 /tmp/topology.yaml ``` The first column is `audit-id`. 
To view the execution log of a certain command, pass the `audit-id` of a command as the flag as follows: From 7ac5f9c4e4c91189c370e82112b9e963e380c9fa Mon Sep 17 00:00:00 2001 From: Aolin Date: Wed, 30 Oct 2024 16:11:27 +0800 Subject: [PATCH 3/8] update release date Signed-off-by: Aolin --- ticdc/ticdc-sink-to-pulsar.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ticdc/ticdc-sink-to-pulsar.md b/ticdc/ticdc-sink-to-pulsar.md index 7c5f4d3f4ce1d..696833b53dd71 100644 --- a/ticdc/ticdc-sink-to-pulsar.md +++ b/ticdc/ticdc-sink-to-pulsar.md @@ -23,7 +23,7 @@ cdc cli changefeed create \ Create changefeed successfully! ID: simple-replication-task -Info: {"upstream_id":7277814241002263370,"namespace":"default","id":"simple-replication-task","sink_uri":"pulsar://127.0.0.1:6650/consumer-test?protocol=canal-json","create_time":"2024-11-11T14:42:32.000904+08:00","start_ts":444203257406423044,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["pulsar_test.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"dispatchers":[{"matcher":["pulsar_test.*"],"partition":"","topic":"test_{schema}_{table}"}],"encoder_concurrency":16,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"pulsar_config":{"connection-timeout":30,"operation-timeout":30,"batching-max-messages":1000,"batching-max-publish-delay":10,"send-timeout":30},"advance_timeout":150},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"use_file_backend":false},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"}},"state":"normal","creator_version":"v8.4.0","resolved_ts":444203257406423044,"checkpoint_ts":444203257406423044,"checkpoint_time":"2024-10-30 14:42:31.410"} +Info: 
{"upstream_id":7277814241002263370,"namespace":"default","id":"simple-replication-task","sink_uri":"pulsar://127.0.0.1:6650/consumer-test?protocol=canal-json","create_time":"2024-11-11T14:42:32.000904+08:00","start_ts":444203257406423044,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["pulsar_test.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"dispatchers":[{"matcher":["pulsar_test.*"],"partition":"","topic":"test_{schema}_{table}"}],"encoder_concurrency":16,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"pulsar_config":{"connection-timeout":30,"operation-timeout":30,"batching-max-messages":1000,"batching-max-publish-delay":10,"send-timeout":30},"advance_timeout":150},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"use_file_backend":false},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"}},"state":"normal","creator_version":"v8.4.0","resolved_ts":444203257406423044,"checkpoint_ts":444203257406423044,"checkpoint_time":"2024-11-11 14:42:31.410"} ``` The meaning of each parameter is as follows: From 7b0164ffc60665a8609f47a9d6729eb86fc8c223 Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Mon, 4 Nov 2024 10:12:40 +0800 Subject: [PATCH 4/8] Update upgrade-tidb-using-tiup.md --- upgrade-tidb-using-tiup.md | 1 - 1 file changed, 1 deletion(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index f887db856b792..9aac3617318ea 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -21,7 +21,6 @@ This document is targeted for the following upgrade paths: > 3. **DO NOT** upgrade a TiDB cluster when a DDL statement is being executed in the cluster (usually for the time-consuming DDL statements such as `ADD INDEX` and the column type changes). Before the upgrade, it is recommended to use the [`ADMIN SHOW DDL`](/sql-statements/sql-statement-admin-show-ddl.md) command to check whether the TiDB cluster has an ongoing DDL job. If the cluster has a DDL job, to upgrade the cluster, wait until the DDL execution is finished or use the [`ADMIN CANCEL DDL`](/sql-statements/sql-statement-admin-cancel-ddl.md) command to cancel the DDL job before you upgrade the cluster. > 4. If the TiDB version before upgrade is 7.1.0 or later, you can ignore the preceding warnings 2 and 3. For more information, see [limitations on using TiDB smooth upgrade](/smooth-upgrade-tidb.md#limitations). > 5. Be sure to read [limitations on user operations](/smooth-upgrade-tidb.md#limitations-on-user-operations) before upgrading your TiDB cluster using TiUP. -> 6. If the version of your current TiDB cluster is TiDB v7.6.0 to v8.2.0, the upgrade target version is v8.3.0 or later, and accelerated table creation is enabled via [`tidb_enable_fast_create_table`](/system-variables.md#tidb_enable_fast_create_table-new-in-v800), you need to first disable the accelerated table creation feature, and then enable it as needed after the upgrade is completed. 
Otherwise, some metadata KVs added by this feature remain in the cluster. Starting from v8.3.0, this feature is optimized. Upgrading to a later TiDB version no longer generates and retains this type of metadata KVs. > **Note:** > From f09fdeef32b616bce8510d5bc96c784434e8d8ba Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Mon, 4 Nov 2024 11:26:48 +0800 Subject: [PATCH 5/8] Apply suggestions from code review --- upgrade-tidb-using-tiup.md | 16 +--------------- 1 file changed, 1 insertion(+), 15 deletions(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 9aac3617318ea..202efd93b8381 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -6,13 +6,8 @@ aliases: ['/docs/dev/upgrade-tidb-using-tiup/','/docs/dev/how-to/upgrade/using-t # Upgrade TiDB Using TiUP -This document is targeted for the following upgrade paths: +This document applies to upgrading to TiDB 8.4.0 from the following: v6.1.x, v6.5.x, v7.1.x, v7.5.x, v8.1.x, v8.2.0, v8.3.0 -- Upgrade from TiDB 4.0 versions to TiDB 8.4. -- Upgrade from TiDB 5.0-5.4 versions to TiDB 8.4. -- Upgrade from TiDB 6.0-6.6 to TiDB 8.4. -- Upgrade from TiDB 7.0-7.6 to TiDB 8.4. -- Upgrade from TiDB 8.0-8.3 to TiDB 8.4. > **Warning:** > @@ -24,7 +19,6 @@ This document is targeted for the following upgrade paths: > **Note:** > -> - If your cluster to be upgraded is v3.1 or an earlier version (v3.0 or v2.1), the direct upgrade to v8.4.0 is not supported. You need to upgrade your cluster first to v4.0 and then to v8.4.0. > - If your cluster to be upgraded is earlier than v6.2, the upgrade might get stuck when you upgrade the cluster to v6.2 or later versions in some scenarios. You can refer to [How to fix the issue](#how-to-fix-the-issue-that-the-upgrade-gets-stuck-when-upgrading-to-v620-or-later-versions). > - TiDB nodes use the value of the [`server-version`](/tidb-configuration-file.md#server-version) configuration item to verify the current TiDB version. Therefore, to avoid unexpected behaviors, before upgrading the TiDB cluster, you need to set the value of `server-version` to empty or the real version of the current TiDB cluster. > - Setting the [`performance.force-init-stats`](/tidb-configuration-file.md#force-init-stats-new-in-v657-and-v710) configuration item to `ON` prolongs the TiDB startup time, which might cause startup timeouts and upgrade failures. To avoid this issue, it is recommended to set a longer waiting timeout for TiUP. @@ -57,11 +51,6 @@ This document is targeted for the following upgrade paths: - TiDB currently does not support version downgrade or rolling back to an earlier version after the upgrade. - For the v4.0 cluster managed using TiDB Ansible, you need to import the cluster to TiUP (`tiup cluster`) for new management according to [Upgrade TiDB Using TiUP (v4.0)](https://docs.pingcap.com/tidb/v4.0/upgrade-tidb-using-tiup#import-tidb-ansible-and-the-inventoryini-configuration-to-tiup). Then you can upgrade the cluster to v8.4.0 according to this document. -- To update versions earlier than v3.0 to v8.4.0: - 1. Update this version to 3.0 using [TiDB Ansible](https://docs.pingcap.com/tidb/v3.0/upgrade-tidb-using-ansible). - 2. Use TiUP (`tiup cluster`) to import the TiDB Ansible configuration. - 3. Update the 3.0 version to 4.0 according to [Upgrade TiDB Using TiUP (v4.0)](https://docs.pingcap.com/tidb/v4.0/upgrade-tidb-using-tiup#import-tidb-ansible-and-the-inventoryini-configuration-to-tiup). - 4. Upgrade the cluster to v8.4.0 according to this document. 
- Support upgrading the versions of TiCDC, TiFlash, and other components. - When upgrading TiFlash from versions earlier than v6.3.0 to v6.3.0 and later versions, note that the CPU must support the AVX2 instruction set under the Linux AMD64 architecture and the ARMv8 instruction set architecture under the Linux ARM64 architecture. For details, see the description in [v6.3.0 Release Notes](/releases/release-6.3.0.md#others). - For detailed compatibility changes of different versions, see the [Release Notes](/releases/release-notes.md) of each version. Modify your cluster configuration according to the "Compatibility Changes" section of the corresponding release notes. @@ -162,9 +151,6 @@ Now, the offline mirror has been upgraded successfully. If an error occurs durin 3. After the modification, enter : + w + q to save the change and exit the editing mode. Enter Y to confirm the change. -> **Note:** -> -> Before you upgrade the cluster to v6.6.0, make sure that the parameters you have modified in v4.0 are compatible in v8.4.0. For details, see [TiKV Configuration File](/tikv-configuration-file.md). ### Step 4: Check the DDL and backup status of the cluster From 2aa6d1fb5ce1bdadf2e6765323561cb4935fbac6 Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Mon, 4 Nov 2024 11:31:10 +0800 Subject: [PATCH 6/8] Update upgrade-tidb-using-tiup.md Co-authored-by: Frank945946 <108602632+Frank945946@users.noreply.github.com> --- upgrade-tidb-using-tiup.md | 1 - 1 file changed, 1 deletion(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 202efd93b8381..83d5b454d2c0a 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -50,7 +50,6 @@ This document applies to upgrading to TiDB 8.4.0 from the following: v6.1.x, v6. ## Upgrade caveat - TiDB currently does not support version downgrade or rolling back to an earlier version after the upgrade. -- For the v4.0 cluster managed using TiDB Ansible, you need to import the cluster to TiUP (`tiup cluster`) for new management according to [Upgrade TiDB Using TiUP (v4.0)](https://docs.pingcap.com/tidb/v4.0/upgrade-tidb-using-tiup#import-tidb-ansible-and-the-inventoryini-configuration-to-tiup). Then you can upgrade the cluster to v8.4.0 according to this document. - Support upgrading the versions of TiCDC, TiFlash, and other components. - When upgrading TiFlash from versions earlier than v6.3.0 to v6.3.0 and later versions, note that the CPU must support the AVX2 instruction set under the Linux AMD64 architecture and the ARMv8 instruction set architecture under the Linux ARM64 architecture. For details, see the description in [v6.3.0 Release Notes](/releases/release-6.3.0.md#others). - For detailed compatibility changes of different versions, see the [Release Notes](/releases/release-notes.md) of each version. Modify your cluster configuration according to the "Compatibility Changes" section of the corresponding release notes. 
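Patches 4 through 6 trim the pre-upgrade notes in `upgrade-tidb-using-tiup.md`, but the checks that survive — "Step 4: Check the DDL and backup status of the cluster" and the `ADMIN SHOW DDL`/`ADMIN CANCEL DDL` guidance quoted above — can be scripted. The following is a minimal sketch and not part of the patched docs: the host, port, and user are placeholders, and the `SHOW BACKUPS`/`SHOW RESTORES` statements are assumed from TiDB's SQL interface rather than taken from this series.

```shell
# Hypothetical pre-upgrade check; adjust -h/-P/-u to your own cluster.
# The upgrade should only start once no DDL job or BR task is running.
mysql -h 127.0.0.1 -P 4000 -u root -e "ADMIN SHOW DDL;"
mysql -h 127.0.0.1 -P 4000 -u root -e "SHOW BACKUPS; SHOW RESTORES;"
```

If `ADMIN SHOW DDL` reports an ongoing job, wait for it to finish or cancel it with `ADMIN CANCEL DDL` before running `tiup cluster upgrade`, as the warnings touched by patch 4 describe.
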
From f6f46a0fd1e91826d13996ca042c664bc2468253 Mon Sep 17 00:00:00 2001 From: Aolin Date: Mon, 4 Nov 2024 16:57:56 +0800 Subject: [PATCH 7/8] Apply suggestions from code review --- upgrade-tidb-using-tiup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 83d5b454d2c0a..5548b78f9465c 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -6,7 +6,7 @@ aliases: ['/docs/dev/upgrade-tidb-using-tiup/','/docs/dev/how-to/upgrade/using-t # Upgrade TiDB Using TiUP -This document applies to upgrading to TiDB 8.4.0 from the following: v6.1.x, v6.5.x, v7.1.x, v7.5.x, v8.1.x, v8.2.0, v8.3.0 +This document applies to upgrading to TiDB v8.4.0 from the following versions: v6.1.x, v6.5.x, v7.1.x, v7.5.x, v8.1.x, v8.2.0, and v8.3.0 > **Warning:** From 56b00be445b62d01b65811bc66f6d2989944ca2b Mon Sep 17 00:00:00 2001 From: Aolin Date: Mon, 4 Nov 2024 16:58:39 +0800 Subject: [PATCH 8/8] make ci happy --- upgrade-tidb-using-tiup.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 5548b78f9465c..76025d2acc84f 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -8,7 +8,6 @@ aliases: ['/docs/dev/upgrade-tidb-using-tiup/','/docs/dev/how-to/upgrade/using-t This document applies to upgrading to TiDB v8.4.0 from the following versions: v6.1.x, v6.5.x, v7.1.x, v7.5.x, v8.1.x, v8.2.0, and v8.3.0 - > **Warning:** > > 1. You cannot upgrade TiFlash online from versions earlier than 5.3 to 5.3 or later. Instead, you must first stop all the TiFlash instances of the early version, and then upgrade the cluster offline. If other components (such as TiDB and TiKV) do not support an online upgrade, follow the instructions in warnings in [Online upgrade](#online-upgrade). @@ -150,7 +149,6 @@ Now, the offline mirror has been upgraded successfully. If an error occurs durin 3. After the modification, enter : + w + q to save the change and exit the editing mode. Enter Y to confirm the change. - ### Step 4: Check the DDL and backup status of the cluster To avoid undefined behaviors or other unexpected problems during the upgrade, it is recommended to check the following items before the upgrade.
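After the eight patches, the end-to-end flow that `upgrade-tidb-using-tiup.md` describes reduces to a few TiUP commands. The sketch below only consolidates commands that already appear in the patched files; `<cluster-name>` is a placeholder for the name of your cluster:

```shell
# Online upgrade to the target version introduced by this series.
tiup cluster upgrade <cluster-name> v8.4.0

# Verify the result; the output should show "Cluster version: v8.4.0".
tiup cluster display <cluster-name>

# Keep the ctl toolset in step with the upgraded cluster.
tiup install ctl:v8.4.0
```

For clusters with a maintenance window, the guide keeps the offline variant (`tiup cluster stop <cluster-name>` followed by `tiup cluster upgrade` with the `--offline` option), and `--force` remains available in the FAQ to skip leader eviction at the cost of restarting the cluster directly.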