Support Write Options in DataFrame::write_* methods #7435

Merged: 6 commits into apache:main on Sep 6, 2023

Conversation

devinjdangelo (Contributor)

Which issue does this PR close?

Closes #7433

Rationale for this change

#7390 adds support for configuring write-related options via SQL statements. This PR extends that work so the same write-related options can be configured via the DataFrame::write_* methods. These methods accept parquet::WriterProperties or csv::WriterBuilder directly rather than requiring string tuples to be passed.
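
For example, the updated write_parquet call looks roughly like the following. This is a minimal sketch based on the test excerpt discussed below; the function name and path are illustrative, and it assumes the parquet crate is available alongside datafusion:

```rust
use datafusion::dataframe::{DataFrame, DataFrameWriteOptions};
use datafusion::error::Result;
use parquet::basic::Compression;
use parquet::file::properties::WriterProperties;

// Illustrative helper: WriterProperties is passed directly as the third argument
// instead of string key/value option tuples.
async fn write_snappy_parquet(df: DataFrame, path: &str) -> Result<()> {
    df.write_parquet(
        path,
        DataFrameWriteOptions::new().with_single_file_output(true),
        Some(
            WriterProperties::builder()
                .set_compression(Compression::SNAPPY)
                .build(),
        ),
    )
    .await?;
    Ok(())
}
```

Passing the builder/properties types directly avoids round-tripping through string options and lets the compiler check the available settings.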

What changes are included in this PR?

  • LogicalPlan::Copy(CopyTo) updated so that StatementOptions or FileTypeWriterOptions can be passed directly.
  • DataFrame::write_* methods updated to accept options and translate them into a CopyTo logical plan.

Are these changes tested?

Yes, via existing tests.

Are there any user-facing changes?

Yes, DataFrame write APIs are updated.

github-actions bot added the sql (SQL Planner), logical-expr (Logical plan and expressions), core (Core DataFusion crate), and sqllogictest (SQL Logic Tests (.slt)) labels on Aug 28, 2023
github-actions bot removed the sqllogictest (SQL Logic Tests (.slt)) label on Aug 29, 2023
devinjdangelo marked this pull request as ready for review on August 29, 2023 at 12:22
@andygrove (Member)

Thanks @devinjdangelo, this looks good. Could you add a test that writes a Parquet file with compression via this API and verifies that the file is written with the correct compression? Perhaps based on the test_write_compressed_parquet test in the Python bindings?

@devinjdangelo (Contributor, Author)

@andygrove I just pushed up a test to verify parquet files are written with the expected compression. Let me know if this looks like what you were looking for.

Review thread on the new test (code excerpt):

```rust
DataFrameWriteOptions::new().with_single_file_output(true),
Some(
    WriterProperties::builder()
        .set_compression(parquet::basic::Compression::SNAPPY)
```

Review comment (Member):

The test looks good. I wonder if it is worth looping over a list of supported compression codecs rather than testing with just one? If the default changed to SNAPPY in the future, this test would no longer verify that the WriterProperties value is respected.

@devinjdangelo (Contributor, Author)

I expanded the test to include all supported compression codecs. Unfortunately, we cannot test all compression levels for the codecs that support levels, since in general the compression level used is not stored in the file metadata; the parquet crate reader always reports the compression level as the "default" level.
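
A minimal sketch of that loop-over-codecs check, for illustration only (the write_with_codec helper and the path are hypothetical, only parameterless codec variants are shown, and the actual test in this PR differs):

```rust
use parquet::basic::Compression;
use parquet::file::reader::{FileReader, SerializedFileReader};
use std::fs::File;

/// Read back the first column chunk's metadata and assert the codec was respected.
fn assert_codec_respected(path: &str, expected: Compression) -> parquet::errors::Result<()> {
    let reader = SerializedFileReader::new(File::open(path)?)?;
    let actual = reader.metadata().row_group(0).column(0).compression();
    assert_eq!(actual, expected);
    Ok(())
}

// Usage, looping over codecs:
// for codec in [Compression::UNCOMPRESSED, Compression::SNAPPY, Compression::LZ4_RAW] {
//     write_with_codec("/tmp/out.parquet", codec).await?; // hypothetical write via write_parquet
//     assert_codec_respected("/tmp/out.parquet", codec)?;
// }
```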

@devinjdangelo (Contributor, Author)


Relevant lines in the parquet crate:

https://github.com/apache/arrow-rs/blob/eeba0a3792a2774dee1d10a25340b2741cf95c9e/parquet/src/file/metadata.rs#L640

https://github.com/apache/arrow-rs/blob/eeba0a3792a2774dee1d10a25340b2741cf95c9e/parquet/src/format.rs#L3495

I suppose that, outside of testing, there is no compelling reason for a parquet file to store the compression level that was used for each column chunk.

@andygrove (Member) left a comment

LGTM. Thanks @devinjdangelo

@andygrove merged commit 87eb126 into apache:main on Sep 6, 2023
21 checks passed
@alamb (Contributor) commented on Sep 6, 2023

Thank you @devinjdangelo and @andygrove !

Labels: core (Core DataFusion crate), logical-expr (Logical plan and expressions), sql (SQL Planner)
Linked issue: Regression: write_parquet no longer supports compression options
Participants: 3