From 5751a40993a4328ef0cf694cc0baa7e0e40bc074 Mon Sep 17 00:00:00 2001
From: xixirangrang <35301108+hfxsd@users.noreply.github.com>
Date: Thu, 23 Nov 2023 11:21:28 +0800
Subject: [PATCH] update DATA_LENGTH

---
 dm/dm-hardware-and-software-requirements.md   | 2 +-
 migrate-large-mysql-to-tidb.md                | 2 +-
 tidb-lightning/tidb-lightning-requirements.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/dm/dm-hardware-and-software-requirements.md b/dm/dm-hardware-and-software-requirements.md
index 891239c56c34c..4c73979bb97a5 100644
--- a/dm/dm-hardware-and-software-requirements.md
+++ b/dm/dm-hardware-and-software-requirements.md
@@ -54,7 +54,7 @@ The target TiKV cluster must have enough disk space to store the imported data. 
 - Indexes might take extra space.
 - RocksDB has a space amplification effect.
 
-You can estimate the data volume by using the following SQL statements to summarize the `data-length` field:
+You can estimate the data volume by using the following SQL statements to summarize the `DATA_LENGTH` field:
 
 - Calculate the size of all schemas, in MiB. Replace `${schema_name}` with your schema name.
 
diff --git a/migrate-large-mysql-to-tidb.md b/migrate-large-mysql-to-tidb.md
index c7d86294e7723..5f687d214d00a 100644
--- a/migrate-large-mysql-to-tidb.md
+++ b/migrate-large-mysql-to-tidb.md
@@ -29,7 +29,7 @@ This document describes how to perform the full migration using Dumpling and TiD
 - During the import, TiDB Lightning needs temporary space to store the sorted key-value pairs. The disk space should be enough to hold the largest single table from the data source.
 - If the full data volume is large, you can increase the binlog storage time in the upstream. This is to ensure that the binlogs are not lost during the incremental replication.
 
-**Note**: It is difficult to calculate the exact data volume exported by Dumpling from MySQL, but you can estimate the data volume by using the following SQL statement to summarize the `data-length` field in the `information_schema.tables` table:
+**Note**: It is difficult to calculate the exact data volume exported by Dumpling from MySQL, but you can estimate the data volume by using the following SQL statement to summarize the `DATA_LENGTH` field in the `information_schema.tables` table:
 
 {{< copyable "" >}}
 
diff --git a/tidb-lightning/tidb-lightning-requirements.md b/tidb-lightning/tidb-lightning-requirements.md
index c7897a9794aba..86c9bb53af71f 100644
--- a/tidb-lightning/tidb-lightning-requirements.md
+++ b/tidb-lightning/tidb-lightning-requirements.md
@@ -84,7 +84,7 @@ The target TiKV cluster must have enough disk space to store the imported data. 
 - Indexes might take extra space.
 - RocksDB has a space amplification effect.
 
-It is difficult to calculate the exact data volume exported by Dumpling from MySQL. However, you can estimate the data volume by using the following SQL statement to summarize the data-length field in the information_schema.tables table:
+It is difficult to calculate the exact data volume exported by Dumpling from MySQL. However, you can estimate the data volume by using the following SQL statement to summarize the `DATA_LENGTH` field in the information_schema.tables table:
 
 Calculate the size of all schemas, in MiB. Replace ${schema_name} with your schema name.
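The estimation statement these pages refer to is not shown in the diff context, but a query of this shape (a sketch, not the exact statement from the docs; the `data_length_MiB` alias is illustrative) summarizes `DATA_LENGTH` per schema:

```sql
-- Estimate the data volume of one schema, in MiB.
-- Replace ${schema_name} with your schema name.
-- Note: DATA_LENGTH excludes indexes and RocksDB space amplification,
-- so treat the result as a lower bound on target disk usage.
SELECT TABLE_SCHEMA,
       ROUND(SUM(DATA_LENGTH) / 1024 / 1024) AS data_length_MiB
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = '${schema_name}'
GROUP BY TABLE_SCHEMA;
```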