Support Write Options in DataFrame::write_* methods #7435
Conversation
Force-pushed from 12e2301 to 48d4387.
Thanks @devinjdangelo, this looks good. Could you add a test that writes a Parquet file with compression via this API and verify that the file is written with the correct compression? Perhaps based on the …
@andygrove I just pushed up a test to verify Parquet files are written with the expected compression. Let me know if this looks like what you were looking for.
datafusion/core/src/dataframe.rs (outdated):
```rust
DataFrameWriteOptions::new().with_single_file_output(true),
Some(
    WriterProperties::builder()
        .set_compression(parquet::basic::Compression::SNAPPY)
```
The test looks good. I wonder if it is worth looping over a list of supported compression codecs rather than just testing with one? If the default changed to `SNAPPY` in the future, this test would no longer really be testing that the `WriterProperties` value is respected.
I expanded the test to include all supported compression codecs. Unfortunately, we cannot test all compression levels for the codecs that support them, since the file metadata does not in general record the compression level that was used. The parquet crate reader always reports the compression level as the default level.
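For reference, a minimal sketch of what such a codec loop might look like. The output path, the codec list, and the assertion style are illustrative assumptions, not the PR's actual test code; leveled codecs (GZIP, BROTLI, ZSTD) are omitted here but could be added with their default levels:

```rust
use std::fs::File;

use datafusion::dataframe::DataFrameWriteOptions;
use datafusion::prelude::*;
use parquet::basic::Compression;
use parquet::file::properties::WriterProperties;
use parquet::file::reader::{FileReader, SerializedFileReader};

#[tokio::test]
async fn write_parquet_respects_compression() -> datafusion::error::Result<()> {
    // Codecs without level parameters; the level itself is not
    // recoverable from the file metadata anyway.
    let codecs = [
        Compression::UNCOMPRESSED,
        Compression::SNAPPY,
        Compression::LZ4_RAW,
    ];

    let ctx = SessionContext::new();
    let df = ctx.sql("SELECT 1 AS a").await?;

    for codec in codecs {
        // Hypothetical output path for the sketch.
        let path = format!("/tmp/compression_test_{codec:?}.parquet");
        df.clone()
            .write_parquet(
                &path,
                DataFrameWriteOptions::new().with_single_file_output(true),
                Some(WriterProperties::builder().set_compression(codec).build()),
            )
            .await?;

        // Read the footer back and check that every column chunk
        // reports the codec we configured.
        let reader = SerializedFileReader::new(File::open(&path)?)?;
        for rg in reader.metadata().row_groups() {
            for col in rg.columns() {
                assert_eq!(col.compression(), codec);
            }
        }
    }
    Ok(())
}
```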
Relevant lines in the parquet crate:
I suppose that, outside of testing, there is no compelling reason for a Parquet file to store the compression level that was used for each column chunk.
LGTM. Thanks @devinjdangelo
Thank you @devinjdangelo and @andygrove!
Which issue does this PR close?
Closes #7433
Rationale for this change
#7390 adds support for configuring write-related options via SQL statements. This PR extends that to allow configuring the same write-related options via the `DataFrame::write_*` methods. These methods accept `parquet::WriterProperties` or `csv::WriterBuilder` directly rather than requiring string tuples to be passed.
What changes are included in this PR?
Are these changes tested?
Yes, via existing tests.
Are there any user-facing changes?
Yes, the DataFrame write APIs are updated. A rough sketch of the new surface follows.
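The sketch below is illustrative only: the output paths and sample data are made up, and the exact signatures should be checked against the merged code. It shows `parquet::WriterProperties` being passed to `write_parquet` and an Arrow `csv::WriterBuilder` being passed to `write_csv`, in place of string tuples:

```rust
use datafusion::arrow::csv::WriterBuilder;
use datafusion::dataframe::DataFrameWriteOptions;
use datafusion::prelude::*;
use parquet::basic::Compression;
use parquet::file::properties::WriterProperties;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();
    let df = ctx.sql("SELECT 1 AS a, 'x' AS b").await?;

    // Parquet: pass WriterProperties directly to control e.g. compression.
    df.clone()
        .write_parquet(
            "/tmp/out.parquet", // hypothetical path
            DataFrameWriteOptions::new().with_single_file_output(true),
            Some(
                WriterProperties::builder()
                    .set_compression(Compression::SNAPPY)
                    .build(),
            ),
        )
        .await?;

    // CSV: pass an Arrow csv::WriterBuilder; the delimiter here is arbitrary.
    df.write_csv(
        "/tmp/out.csv", // hypothetical path
        DataFrameWriteOptions::new(),
        Some(WriterBuilder::new().with_delimiter(b'|')),
    )
    .await?;

    Ok(())
}
```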