
Improve ParquetExec and related documentation #10647

Merged
alamb merged 8 commits into apache:main from alamb/parquet_docs on May 28, 2024

Conversation

@alamb (Contributor) commented May 24, 2024

Which issue does this PR close?

Part of #10549

Rationale for this change

While trying to make an example that uses ParquetExec, I found its documentation was sparse and could be improved

What changes are included in this PR?

  1. Add docstrings to ParquetExec and related structs explaining more of what they do and how they work

Are these changes tested?

By existing CI

Are there any user-facing changes?

Just docstrings; no functional changes

@github-actions github-actions bot added the core Core DataFusion crate label May 24, 2024
@alamb alamb marked this pull request as ready for review May 24, 2024 09:39
@alamb alamb added the documentation Improvements or additions to documentation label May 24, 2024
@alamb (Contributor, Author) commented May 24, 2024

@thinkharderdev, @tustvold, @Ted-Jiang, and @crepererum: if you have time, could you double-check that this correctly describes ParquetExec to your understanding?

Comment on lines 110 to 111
/// * Multi-threaded (aka multi-partition): read from one or more files in
/// parallel. Can read concurrently from multiple row groups from a single file.
Contributor:

I would call this "concurrency" instead of "multi-threading". IIRC we don't implement ANY threading in this operator and solely rely on tokio to dispatch concurrent bits for us. I think it's fine to mention that the concurrency in this operator CAN lead to multi-core usage under specific circumstances.
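
A minimal sketch of that distinction, assuming DataFusion's public `ExecutionPlan::execute` API (one stream per partition); `drain_all_partitions` is a hypothetical helper and module paths may differ between DataFusion versions:

```rust
use std::sync::Arc;

use datafusion::error::Result;
use datafusion::execution::TaskContext;
use datafusion::physical_plan::ExecutionPlan;
use futures::TryStreamExt;

/// Poll every partition of `plan` on its own tokio task. The operator itself
/// spawns no threads; the tokio runtime decides whether the tasks share a
/// core or run on several.
async fn drain_all_partitions(
    plan: Arc<dyn ExecutionPlan>,
    ctx: Arc<TaskContext>,
) -> Result<()> {
    let partitions = plan.output_partitioning().partition_count();
    let mut handles = Vec::with_capacity(partitions);
    for partition in 0..partitions {
        // `execute` is synchronous and cheap: it only constructs the stream
        let stream = plan.execute(partition, Arc::clone(&ctx))?;
        handles.push(tokio::spawn(stream.try_collect::<Vec<_>>()));
    }
    for handle in handles {
        // Each task yields the `RecordBatch`es of one partition
        handle.await.expect("partition task panicked")?;
    }
    Ok(())
}
```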

/// table schema. This can be used to implement "schema evolution". See
/// [`SchemaAdapterFactory`] for more details.
///
/// * metadata_size_hint: controls the number of bytes read from the end of the
Contributor:

FWIW this is passed on to the reader (custom or builtin) and the reader uses that to gather the metadata. The reader CAN however use another more precise source for this information or not read the metadata from object store at all (e.g. it could use an extra service, a dataset-based source or some sort of cache).
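
As a hedged illustration of that flow: the built-in path simply forwards the hint to the parquet crate's object store reader, while a custom factory is free to ignore it and obtain the metadata elsewhere. This sketch assumes the `ParquetObjectReader::new(store, meta)` and `with_footer_size_hint` API as it existed at the time of this PR; `reader_for` is a hypothetical helper name:

```rust
use std::sync::Arc;

use object_store::{ObjectMeta, ObjectStore};
use parquet::arrow::async_reader::ParquetObjectReader;

/// Build the parquet crate's built-in object-store reader, forwarding the
/// optional hint so the footer and metadata can often be fetched in a single
/// request.
fn reader_for(
    store: Arc<dyn ObjectStore>,
    meta: ObjectMeta,
    metadata_size_hint: Option<usize>,
) -> ParquetObjectReader {
    let reader = ParquetObjectReader::new(store, meta);
    match metadata_size_hint {
        // Too small a hint only costs an extra read; too large wastes bytes
        Some(hint) => reader.with_footer_size_hint(hint),
        // A custom factory could instead ignore the hint entirely and serve
        // metadata from a cache, a catalog service, or a dataset-level source
        None => reader,
    }
}
```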

///
/// * Limit pushdown: stop execution early after some number of rows are read.
///
/// * Custom readers: controls I/O for accessing pages. See
Contributor:

Suggested change:
- /// * Custom readers: controls I/O for accessing pages. See
+ /// * Custom readers: implements I/O for accessing byte ranges and the metadata object. See

It's not steering the I/O process; it's actually responsible for performing (or not performing) it. For example, a custom implementation could totally NOT use an object store (which is especially interesting for the metadata bit, see the other comment below).

alamb (Contributor, Author):
good call -- updated

Comment on lines 139 to 141
/// * Step 3: The `ParquetOpener` gets the file metadata by reading the footer,
/// and applies any predicates and projections to determine what pages must be
/// read.
Contributor:

It gets the metadata from the `ParquetFileReaderFactory`, or more specifically from the `AsyncFileReader` that this factory returns. The `ParquetOpener` doesn't care where the metadata comes from.
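
A condensed sketch of why that is, assuming the parquet crate's `ParquetRecordBatchStreamBuilder` API (exact bounds may differ by version); `open_parquet` is a hypothetical stand-in for what the opener does with the reader it received from the factory:

```rust
use parquet::arrow::async_reader::{AsyncFileReader, ParquetRecordBatchStreamBuilder};
use parquet::errors::Result;

/// Condensed view: the reader handed over by the factory is the only source
/// of both bytes and metadata.
async fn open_parquet<R>(reader: R) -> Result<()>
where
    R: AsyncFileReader + Unpin + Send + 'static,
{
    // `new` calls `reader.get_metadata()` internally; it never talks to the
    // object store (or anything else) directly.
    let builder = ParquetRecordBatchStreamBuilder::new(reader).await?;
    // Projections, predicates, and row group selections would be applied
    // here before building the stream of record batches.
    let _stream = builder.build()?;
    Ok(())
}
```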

/// Interface for creating [`AsyncFileReader`]s to read parquet files.
///
/// This interface is used by [`ParquetOpener`] in order to create readers for
/// parquet files. Implementations of this trait can be used to provide custom
Contributor:

What's "this trait" in this case? I guess you're referring to AsyncFileReader, not ParquetFileReaderFactory here. To avoid confusion and give the user more freedom how/where the implement "pre-cached data, I/O ..." etc., I suggest to start a new paragraph and say:

The combined implementations of [`ParquetFileReaderFactory`] and [`AsyncFileReader`]
can be used to provide custom data access operations such as
pre-cached data, I/O coalescing, etc.
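
For illustration, one way the combined `ParquetFileReaderFactory` and `AsyncFileReader` implementations can provide "pre-cached data" is a reader that delegates byte-range I/O to an inner reader but serves `ParquetMetaData` from memory. A minimal sketch, assuming the `AsyncFileReader` trait shape from the parquet crate at the time of this PR (signatures vary across versions); `CachedMetadataReader` is a hypothetical name:

```rust
use std::ops::Range;
use std::sync::Arc;

use bytes::Bytes;
use futures::future::BoxFuture;
use parquet::arrow::async_reader::AsyncFileReader;
use parquet::errors::Result;
use parquet::file::metadata::ParquetMetaData;

/// Performs page I/O through `inner` (e.g. an object store reader) but
/// answers metadata requests from a pre-fetched copy, so the footer is never
/// re-read at query time.
struct CachedMetadataReader<R: AsyncFileReader> {
    inner: R,
    metadata: Arc<ParquetMetaData>,
}

impl<R: AsyncFileReader> AsyncFileReader for CachedMetadataReader<R> {
    fn get_bytes(&mut self, range: Range<usize>) -> BoxFuture<'_, Result<Bytes>> {
        // Byte ranges for pages are still fetched by the wrapped reader
        self.inner.get_bytes(range)
    }

    fn get_metadata(&mut self) -> BoxFuture<'_, Result<Arc<ParquetMetaData>>> {
        // No footer read at all: hand back the cached metadata
        let metadata = Arc::clone(&self.metadata);
        Box::pin(async move { Ok(metadata) })
    }
}
```

A factory would return such a reader from `create_reader`, which is where I/O coalescing or other custom data access strategies could be layered in as well.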

alamb (Contributor, Author):

Excellent idea. I did so

@github-actions github-actions bot removed the documentation Improvements or additions to documentation label May 25, 2024
@comphead (Contributor) left a comment:

lgtm thanks @alamb

@alamb alamb merged commit 7f0e194 into apache:main May 28, 2024
23 checks passed
@alamb alamb deleted the alamb/parquet_docs branch May 28, 2024 00:14
findepi pushed a commit to findepi/datafusion that referenced this pull request Jul 16, 2024
* Improve ParquetExec and related documentation

* Improve documentation

* Update datafusion/core/src/datasource/physical_plan/parquet/schema_adapter.rs

Co-authored-by: Oleks V <[email protected]>

* fix link

---------

Co-authored-by: Oleks V <[email protected]>
Labels: core (Core DataFusion crate)