Add lightning import support for parquet format files exported by Snowflake (#15281) (#19836)
ti-chi-bot authored Dec 30, 2024
1 parent 6cd803a commit efad6c4
Showing 5 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion ecosystem-tool-user-guide.md
@@ -90,7 +90,7 @@ The following are the basics of TiDB Lightning:
 - Data source:
     - The output files of Dumpling
     - Other compatible CSV files
-    - Parquet files exported from Amazon Aurora or Apache Hive
+    - Parquet files exported from Amazon Aurora, Apache Hive, or Snowflake
 - Supported TiDB versions: v2.1 and later versions
 - Kubernetes support: Yes. See [Quickly restore data into a TiDB cluster on Kubernetes using TiDB Lightning](https://docs.pingcap.com/tidb-in-kubernetes/stable/restore-data-using-tidb-lightning) for details.

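The hunk above adds Snowflake to the list of supported Parquet sources. For context, a minimal sketch of how such Parquet files might be produced on the Snowflake side, assuming a named stage `@my_stage` and placeholder database, schema, and table names (none of these appear in the commit itself):

```sql
-- Hypothetical unload of a Snowflake table to Parquet files on a named stage.
-- @my_stage, my_db, my_schema, and my_table are placeholders.
COPY INTO @my_stage/lightning-import/
FROM my_db.my_schema.my_table
FILE_FORMAT = (TYPE = PARQUET)
HEADER = TRUE; -- include column names in the Parquet schema
```

The unloaded files can then be staged on a local disk or Amazon S3 for TiDB Lightning to read.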
2 changes: 1 addition & 1 deletion migration-tools.md
@@ -29,7 +29,7 @@ This document introduces the user scenarios, supported upstreams and downstreams
 - **User scenario**: Full data import into TiDB
 - **Upstream (the imported source file)**:
     - Files exported from Dumpling
-    - Parquet files exported by Amazon Aurora or Apache Hive
+    - Parquet files exported by Amazon Aurora, Apache Hive, and Snowflake
     - CSV files
     - Data from local disks or Amazon S3
 - **Downstream**: TiDB
2 changes: 1 addition & 1 deletion tidb-lightning/tidb-lightning-data-source.md
@@ -341,7 +341,7 @@ When TiDB Lightning processes a SQL file, because TiDB Lightning cannot quickly

 ## Parquet

-TiDB Lightning currently only supports Parquet files generated by Amazon Aurora or Apache Hive. To identify the file structure in S3, use the following configuration to match all data files:
+TiDB Lightning currently only supports Parquet files generated by Amazon Aurora, Apache Hive, and Snowflake. To identify the file structure in S3, use the following configuration to match all data files:

 ```
 [[mydumper.files]]
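The configuration block in the hunk above is cut off by the diff view. As a sketch only, a complete `[[mydumper.files]]` matching rule typically looks like the following, assuming data files named `<schema>.<table>.parquet`; the regular expression is illustrative rather than taken from this commit:

```toml
[[mydumper.files]]
# Illustrative pattern: match files named `<schema>.<table>.parquet`
# anywhere under the data source directory or S3 prefix.
pattern = '(?i)^(?:[^/]*/)*([a-z0-9\-_]+)\.([a-z0-9\-_]+)\.(parquet)$'
schema = '$1' # first capture group: database name
table = '$2'  # second capture group: table name
type = '$3'   # third capture group: file type
```

Adjust the pattern to the actual layout and naming of the exported files.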
2 changes: 1 addition & 1 deletion tidb-lightning/tidb-lightning-faq.md
@@ -51,7 +51,7 @@ ADMIN CHECKSUM TABLE `schema`.`table`;

 TiDB Lightning supports:

-- Importing files exported by [Dumpling](/dumpling-overview.md), CSV files, and [Apache Parquet files generated by Amazon Aurora](/migrate-aurora-to-tidb.md).
+- Importing files exported by [Dumpling](/dumpling-overview.md), CSV files, and [Apache Parquet files generated by Amazon Aurora](/migrate-aurora-to-tidb.md), Apache Hive, and Snowflake.
 - Reading data from a local disk or from the Amazon S3 storage.

 ## Could TiDB Lightning skip creating schema and tables?
2 changes: 1 addition & 1 deletion tidb-lightning/tidb-lightning-overview.md
@@ -11,7 +11,7 @@ TiDB Lightning supports the following file formats:

 - Files exported by [Dumpling](/dumpling-overview.md)
 - CSV files
-- [Apache Parquet files generated by Amazon Aurora](/migrate-aurora-to-tidb.md)
+- [Apache Parquet files generated by Amazon Aurora](/migrate-aurora-to-tidb.md), Apache Hive, or Snowflake

 TiDB Lightning can read data from the following sources:

