From 3f7a8ec0bde8ffbb73d050db9d5a314a1b90e365 Mon Sep 17 00:00:00 2001
From: Ti Chi Robot
Date: Thu, 23 Nov 2023 13:11:41 +0800
Subject: [PATCH] Clarify statement on backup-and-restore-using-dumpling page
 (#15423) (#15429)

---
 ...up-and-restore-using-dumpling-lightning.md | 79 +++++++++++++++++++----
 dm/dm-hardware-and-software-requirements.md   |  2 +-
 migrate-large-mysql-to-tidb.md                |  2 +-
 tidb-lightning/tidb-lightning-requirements.md |  2 +-
 4 files changed, 71 insertions(+), 14 deletions(-)

diff --git a/backup-and-restore-using-dumpling-lightning.md b/backup-and-restore-using-dumpling-lightning.md
index fa4942a0c17af..640969e2a47c8 100644
--- a/backup-and-restore-using-dumpling-lightning.md
+++ b/backup-and-restore-using-dumpling-lightning.md
@@ -7,20 +7,30 @@ summary: Learn how to use Dumpling and TiDB Lightning to back up and restore ful
 
 This document introduces how to use Dumpling and TiDB Lightning to back up and restore full data of TiDB.
 
-If you need to back up a small amount of data (for example, less than 50 GB) and do not require high backup speed, you can use [Dumpling](/dumpling-overview.md) to export data from the TiDB database and then use [TiDB Lightning](/tidb-lightning/tidb-lightning-overview.md) to import the data into another TiDB database. For more information about backup and restore, see [TiDB Backup & Restore Overview](/br/backup-and-restore-overview.md).
+If you need to back up a small amount of data (for example, less than 50 GiB) and do not require high backup speed, you can use [Dumpling](/dumpling-overview.md) to export data from the TiDB database and then use [TiDB Lightning](/tidb-lightning/tidb-lightning-overview.md) to restore the data into another TiDB database.
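+
+A minimal sketch of the export step is shown below; the connection parameters and the output directory are illustrative, and the matching import step is sketched in the disk-space notes later in this document:
+
+```shell
+# Export the source database as SQL files into ./dumpling-output,
+# splitting files at 256 MiB so that the import can be parallelized.
+tiup dumpling -h 127.0.0.1 -P 4000 -u root -o ./dumpling-output -F 256MiB
+```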
+
+If you need to back up larger databases, the recommended method is to use [BR](/br/backup-and-restore-overview.md). Although Dumpling can also export large databases, BR is the better tool for that purpose.
 
 ## Requirements
 
-- Install and start Dumpling:
+- Install Dumpling:
 
     ```shell
-    tiup install dumpling && tiup dumpling
+    tiup install dumpling
     ```
 
-- Install and start TiDB Lightning:
+- Install TiDB Lightning:
 
     ```shell
-    tiup install tidb lightning && tiup tidb lightning
+    tiup install tidb-lightning
     ```
 
 - [Grant the source database privileges required for Dumpling](/dumpling-overview.md#export-data-from-tidb-or-mysql)
@@ -41,14 +51,61 @@ If you need to save data of one backup task to the local disk, note the followin
 - Dumpling requires a disk space that can store the whole data source (or to store all upstream tables to be exported). To calculate the required space, see [Downstream storage space requirements](/tidb-lightning/tidb-lightning-requirements.md#storage-space-of-the-target-database).
 - During the import, TiDB Lightning needs temporary space to store the sorted key-value pairs. The disk space should be enough to hold the largest single table from the data source.
 
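+To complete the flow sketched in the overview, the following is a minimal TiDB Lightning setup using the `local` backend; the paths, addresses, and credentials are illustrative. `sorted-kv-dir` is the temporary space for sorted key-value pairs described in the second point above:
+
+```shell
+# Write a minimal TiDB Lightning configuration, then start the import.
+cat > tidb-lightning.toml <<'EOF'
+[tikv-importer]
+# With the "local" backend, key-value pairs are sorted under sorted-kv-dir
+# before ingestion, so it must be able to hold the largest single table.
+backend = "local"
+sorted-kv-dir = "/mnt/ssd/sorted-kv"
+
+[mydumper]
+# The directory that Dumpling exported the data to.
+data-source-dir = "./dumpling-output"
+
+[tidb]
+host = "127.0.0.1"
+port = 4000
+user = "root"
+status-port = 10080
+pd-addr = "127.0.0.1:2379"
+EOF
+
+tiup tidb-lightning -config tidb-lightning.toml
+```
+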
-**Note**: It is difficult to calculate the exact data volume exported by Dumpling from MySQL, but you can estimate the data volume by using the following SQL statement to summarize the `data-length` field in the `information_schema.tables` table:
+**Note**: It is difficult to calculate the exact data volume exported by Dumpling from MySQL, but you can estimate the data volume by using the following SQL statement to summarize the `DATA_LENGTH` field in the `information_schema.tables` table:
 
 ```sql
-/* Calculate the size of all schemas, in MiB. Replace ${schema_name} with your schema name. */
-SELECT table_schema,SUM(data_length)/1024/1024 AS data_length,SUM(index_length)/1024/1024 AS index_length,SUM(data_length+index_length)/1024/1024 AS SUM FROM information_schema.tables WHERE table_schema = "${schema_name}" GROUP BY table_schema;
-
-/* Calculate the size of the largest table, in MiB. Replace ${schema_name} with your schema name. */
-SELECT table_name,table_schema,SUM(data_length)/1024/1024 AS data_length,SUM(index_length)/1024/1024 AS index_length,SUM(data_length+index_length)/1024/1024 AS SUM from information_schema.tables WHERE table_schema = "${schema_name}" GROUP BY table_name,table_schema ORDER BY SUM DESC LIMIT 5;
+-- Calculate the size of all schemas
+SELECT
+    TABLE_SCHEMA,
+    FORMAT_BYTES(SUM(DATA_LENGTH)) AS 'Data Size',
+    FORMAT_BYTES(SUM(INDEX_LENGTH)) AS 'Index Size'
+FROM
+    information_schema.tables
+GROUP BY
+    TABLE_SCHEMA;
+
+-- Calculate the 5 largest tables
+SELECT
+    TABLE_NAME,
+    TABLE_SCHEMA,
+    FORMAT_BYTES(SUM(DATA_LENGTH)) AS 'Data Size',
+    FORMAT_BYTES(SUM(INDEX_LENGTH)) AS 'Index Size',
+    FORMAT_BYTES(SUM(DATA_LENGTH+INDEX_LENGTH)) AS 'Total Size'
+FROM
+    information_schema.tables
+GROUP BY
+    TABLE_NAME,
+    TABLE_SCHEMA
+ORDER BY
+    SUM(DATA_LENGTH+INDEX_LENGTH) DESC
+LIMIT
+    5;
 ```
 
 ### Disk space for the target TiKV cluster
diff --git a/dm/dm-hardware-and-software-requirements.md b/dm/dm-hardware-and-software-requirements.md
index 891239c56c34c..4c73979bb97a5 100644
--- a/dm/dm-hardware-and-software-requirements.md
+++ b/dm/dm-hardware-and-software-requirements.md
@@ -54,7 +54,7 @@ The target TiKV cluster must have enough disk space to store the imported data.
 - Indexes might take extra space.
 - RocksDB has a space amplification effect.
 
-You can estimate the data volume by using the following SQL statements to summarize the `data-length` field:
+You can estimate the data volume by using the following SQL statements to summarize the `DATA_LENGTH` field:
 
 - Calculate the size of all schemas, in MiB. Replace `${schema_name}` with your schema name.
 
diff --git a/migrate-large-mysql-to-tidb.md b/migrate-large-mysql-to-tidb.md
index c7d86294e7723..5f687d214d00a 100644
--- a/migrate-large-mysql-to-tidb.md
+++ b/migrate-large-mysql-to-tidb.md
@@ -29,7 +29,7 @@ This document describes how to perform the full migration using Dumpling and TiD
 - During the import, TiDB Lightning needs temporary space to store the sorted key-value pairs. The disk space should be enough to hold the largest single table from the data source.
 - If the full data volume is large, you can increase the binlog storage time in the upstream. This is to ensure that the binlogs are not lost during the incremental replication.
 
-**Note**: It is difficult to calculate the exact data volume exported by Dumpling from MySQL, but you can estimate the data volume by using the following SQL statement to summarize the `data-length` field in the `information_schema.tables` table:
+**Note**: It is difficult to calculate the exact data volume exported by Dumpling from MySQL, but you can estimate the data volume by using the following SQL statement to summarize the `DATA_LENGTH` field in the `information_schema.tables` table:
 
 {{< copyable "" >}}
 
diff --git a/tidb-lightning/tidb-lightning-requirements.md b/tidb-lightning/tidb-lightning-requirements.md
index c7897a9794aba..86c9bb53af71f 100644
--- a/tidb-lightning/tidb-lightning-requirements.md
+++ b/tidb-lightning/tidb-lightning-requirements.md
@@ -84,7 +84,7 @@ The target TiKV cluster must have enough disk space to store the imported data.
 - Indexes might take extra space.
 - RocksDB has a space amplification effect.
 
-It is difficult to calculate the exact data volume exported by Dumpling from MySQL. However, you can estimate the data volume by using the following SQL statement to summarize the data-length field in the information_schema.tables table:
+It is difficult to calculate the exact data volume exported by Dumpling from MySQL. However, you can estimate the data volume by using the following SQL statement to summarize the `DATA_LENGTH` field in the `information_schema.tables` table:
 
 Calculate the size of all schemas, in MiB. Replace ${schema_name} with your schema name.