From 33813c4ffc01b0a9e62274b823c153dfa775d06c Mon Sep 17 00:00:00 2001
From: Frank945946
Date: Tue, 21 Nov 2023 21:41:22 +0800
Subject: [PATCH] Update sql-statement-import-into.md

---
 sql-statements/sql-statement-import-into.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sql-statements/sql-statement-import-into.md b/sql-statements/sql-statement-import-into.md
index 22aa492d714e0..e5b0fd29b020c 100644
--- a/sql-statements/sql-statement-import-into.md
+++ b/sql-statements/sql-statement-import-into.md
@@ -36,7 +36,7 @@ The `IMPORT INTO` statement is used to import data in formats such as `CSV`, `SQ
 - When the [Global Sort](/tidb-global-sort.md) feature is used for data import, the data size of a single row after encoding must not exceed 32 MiB.
 - When the Global Sort feature is used for data import, if the target TiDB cluster is deleted before the import task is completed, temporary data used for global sorting might remain on Amazon S3. In this case, you need to delete the residual data manually to avoid increasing S3 storage costs.
 - Ensure that the data to be imported does not contain any records with primary key or non-null unique index conflicts. Otherwise, the conflicts can result in import task failures.
-- Known Defect: Avoid executing PD downsizing during data import to prevent import task failure.
+- Known issue: the `IMPORT INTO` task might fail if the PD addresses in the TiDB node configuration file are inconsistent with the current PD topology of the cluster. This can occur, for example, when PD nodes were scaled in previously but the TiDB configuration file was not updated accordingly.
 
 ## Prerequisites for import