diff --git a/ecosystem-tool-user-guide.md b/ecosystem-tool-user-guide.md
index 309ae803f6b2e..d34973137168c 100644
--- a/ecosystem-tool-user-guide.md
+++ b/ecosystem-tool-user-guide.md
@@ -90,7 +90,7 @@ The following are the basics of TiDB Lightning:
 - Data source:
     - The output files of Dumpling
     - Other compatible CSV files
-    - Parquet files exported from Amazon Aurora or Apache Hive
+    - Parquet files exported from Amazon Aurora, Apache Hive, or Snowflake
 - Supported TiDB versions: v2.1 and later versions
 - Kubernetes support: Yes. See [Quickly restore data into a TiDB cluster on Kubernetes using TiDB Lightning](https://docs.pingcap.com/tidb-in-kubernetes/stable/restore-data-using-tidb-lightning) for details.
 
diff --git a/migration-tools.md b/migration-tools.md
index 6101f9b8cdc6c..ec063d14f3626 100644
--- a/migration-tools.md
+++ b/migration-tools.md
@@ -29,7 +29,7 @@ This document introduces the user scenarios, supported upstreams and downstreams
 - **User scenario**: Full data import into TiDB
 - **Upstream (the imported source file)**:
     - Files exported from Dumpling
-    - Parquet files exported by Amazon Aurora or Apache Hive
+    - Parquet files exported by Amazon Aurora, Apache Hive, or Snowflake
     - CSV files
     - Data from local disks or Amazon S3
 - **Downstream**: TiDB
diff --git a/tidb-lightning/tidb-lightning-data-source.md b/tidb-lightning/tidb-lightning-data-source.md
index 579ea28672769..4abd184210b34 100644
--- a/tidb-lightning/tidb-lightning-data-source.md
+++ b/tidb-lightning/tidb-lightning-data-source.md
@@ -341,7 +341,7 @@ When TiDB Lightning processes a SQL file, because TiDB Lightning cannot quickly
 
 ## Parquet
 
-TiDB Lightning currently only supports Parquet files generated by Amazon Aurora or Apache Hive. To identify the file structure in S3, use the following configuration to match all data files:
+TiDB Lightning currently only supports Parquet files generated by Amazon Aurora, Apache Hive, or Snowflake. To identify the file structure in S3, use the following configuration to match all data files:
 
 ```
 [[mydumper.files]]
diff --git a/tidb-lightning/tidb-lightning-faq.md b/tidb-lightning/tidb-lightning-faq.md
index 1bad6afa2ac0b..b37f531a9cb45 100644
--- a/tidb-lightning/tidb-lightning-faq.md
+++ b/tidb-lightning/tidb-lightning-faq.md
@@ -51,7 +51,7 @@ ADMIN CHECKSUM TABLE `schema`.`table`;
 
 TiDB Lightning supports:
 
-- Importing files exported by [Dumpling](/dumpling-overview.md), CSV files, and [Apache Parquet files generated by Amazon Aurora](/migrate-aurora-to-tidb.md).
+- Importing files exported by [Dumpling](/dumpling-overview.md), CSV files, and [Apache Parquet files](/migrate-aurora-to-tidb.md) generated by Amazon Aurora, Apache Hive, or Snowflake.
 - Reading data from a local disk or from the Amazon S3 storage.
 
 ## Could TiDB Lightning skip creating schema and tables?
diff --git a/tidb-lightning/tidb-lightning-overview.md b/tidb-lightning/tidb-lightning-overview.md
index e9b6d69cf02ad..a7d1d2e41e8c1 100644
--- a/tidb-lightning/tidb-lightning-overview.md
+++ b/tidb-lightning/tidb-lightning-overview.md
@@ -11,7 +11,7 @@ TiDB Lightning supports the following file formats:
 
 - Files exported by [Dumpling](/dumpling-overview.md)
 - CSV files
-- [Apache Parquet files generated by Amazon Aurora](/migrate-aurora-to-tidb.md)
+- [Apache Parquet files](/migrate-aurora-to-tidb.md) generated by Amazon Aurora, Apache Hive, or Snowflake
 
 TiDB Lightning can read data from the following sources:
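
As background for the `tidb-lightning-data-source.md` hunk above: a `[[mydumper.files]]` entry is a customized file-matching rule in the TiDB Lightning configuration, where capture groups from a regular expression are mapped to the schema name, table name, and file type. The sketch below is illustrative only; the bucket path and the exact `pattern` are assumptions for an Aurora-style `schema.table/file.parquet` layout, not values taken from the documentation.

```toml
[mydumper]
# Hypothetical source location; point this at your own bucket or directory.
data-source-dir = "s3://example-bucket/parquet-export/"

[[mydumper.files]]
# Assumed layout: <schema>.<table>/.../<file>.parquet (Aurora-style export).
# Capture group 1 -> schema, group 2 -> table, group 3 -> file type.
pattern = '(?i)^(?:[^/]*/)*([a-z0-9_]+)\.([a-z0-9_]+)/(?:[^/]*/)*(?:[a-z0-9\-_.]+\.(parquet))$'
schema = '$1'
table = '$2'
type = '$3'
```

Under these assumptions, a key such as `parquet-export/mydb.mytable/part-00000.parquet` would be imported into the table `mydb`.`mytable` as Parquet data.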