By using ZSTD-compressed Parquet, I was able to shrink metadata-rc1.parquet from 25 GB to 16 GB.
The walkthrough below also renames some columns for clarity.
DuckDB writes all row groups into a single Parquet file, whereas Spark writes each row group to a separate file inside an output directory.
$ aria2c https://dl.fbaipublicfiles.com/esmatlas/v2023_02/metadata-rc1.parquet
$ duckdb
PRAGMA temp_directory='/tmp'; -- let DuckDB spill to disk when the data exceeds memory
create view esm as select * from read_parquet('metadata-rc1.parquet');
-- rename columns for clarity
create view metadata as select
    id,
    ptm as predicted_tm_score,
    plddt as predicted_average_lddt,
    plddt is not null as is_folded,
    num_conf as confident_residues,
    len as total_residues,
    is_fragment,
    sequenceChecksum as checksum,
    esmfold_version,
    atlas_version
from esm;
copy metadata to 'metadata-rc1-duckdb.parquet' (FORMAT 'parquet', COMPRESSION 'ZSTD', ROW_GROUP_SIZE 100000);
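As an optional sanity check (a quick sketch using DuckDB's built-in parquet_metadata table function), you can confirm the codec and count the row groups without scanning the 16 GB of data:
select distinct compression from parquet_metadata('metadata-rc1-duckdb.parquet');
select count(distinct row_group_id) as row_groups from parquet_metadata('metadata-rc1-duckdb.parquet');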
$ spark-shell
// re-write the single DuckDB file as a directory of ZSTD-compressed part files
val df = spark.read.parquet("metadata-rc1-duckdb.parquet")
df.write.option("compression", "zstd").parquet("metadata-rc1-spark.parquet")
$ du -h *
25G metadata-rc1.parquet # original parquet file
16G metadata-rc1-duckdb.parquet # one parquet file with many row groups
16G metadata-rc1-spark.parquet # directory with one parquet file per row group
$ ls metadata-rc1-spark.parquet | head -n 4
_SUCCESS
part-00000-f13eb4ef-a8bf-4869-b8fb-7265878582ac-c000.zstd.parquet
part-00001-f13eb4ef-a8bf-4869-b8fb-7265878582ac-c000.zstd.parquet
part-00002-f13eb4ef-a8bf-4869-b8fb-7265878582ac-c000.zstd.parquet
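The two layouts stay interchangeable. As a quick sketch, back in the duckdb shell you can read the Spark output directory through a glob and compare row counts against the single-file version:
select count(*) from read_parquet('metadata-rc1-duckdb.parquet');
select count(*) from read_parquet('metadata-rc1-spark.parquet/*.parquet');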