
[spark] Fix reported statistics does not do column pruning #4137

Merged: 2 commits into apache:master on Sep 13, 2024

Conversation

ulysses-you (Contributor):

Purpose

SupportsReportStatistics#estimateStatistics should report statistics after column pruning and filter pushdown have been applied. This PR fixes the case where the reported statistics did not account for column pruning. In addition, we should take the size of metadata columns into account.

Introduce a defaultSize method for DataType, so that we can get the size in bytes of a field even when column statistics are not available.
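As a rough illustration of the idea (the byte values here are guesses for illustration, not necessarily what the PR implements), such a defaultSize maps each Paimon DataType to a fixed per-field byte estimate:

```scala
import org.apache.paimon.types._

// Illustrative sketch only: per-type byte estimates in the spirit of the
// defaultSize introduced by this PR; the actual values in Paimon may differ.
def defaultSizeOf(tpe: DataType): Int = tpe match {
  case _: BooleanType | _: TinyIntType => 1
  case _: IntType | _: FloatType       => 4
  case _: BigIntType | _: DoubleType   => 8
  case _: CharType | _: VarCharType    => 20 // rough average for strings
  case _                               => 8  // conservative fallback
}
```

The scan can then estimate sizeInBytes as rowCount times the sum of these per-field estimates over the pruned read schema, falling back to them only when a column's avgLen statistic is absent.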

Tests

add test

API and Format

no

Documentation

@ulysses-you (Contributor, Author):

cc @JingsongLi @Zouxxyy thank you

@JingsongLi (Contributor):

@Zouxxyy We may need to run a TPC-DS test to show the impact.

@Zouxxyy (Contributor) commented on Sep 11, 2024:

@ulysses-you Thanks for the contribution. Have you tested the effect in production scenarios? We tested the old code and it had the same effect as Parquet's analyze.

@ulysses-you (Contributor, Author):

Thank you @Zouxxyy. I tested with 20 columns, using explain cost to check the sizeInBytes after running analyze, and it does have an effect:

Without this PR:

```
explain cost select c1, c2, c3 from w_t;

== Optimized Logical Plan ==
RelationV2[c1#163, c2#164, c3#165] default.w_t, Statistics(sizeInBytes=36.8 MiB, rowCount=1.00E+5)

explain cost select * from w_t;

== Optimized Logical Plan ==
RelationV2[c1#194, c2#195, c3#196, c4#197, c5#198, c6#199, c7#200, c8#201, c9#202, c10#203, c11#204, c12#205, c13#206, c14#207, c15#208, c16#209, c17#210, c18#211, c19#212, c20#213] default.w_t, Statistics(sizeInBytes=36.8 MiB, rowCount=1.00E+5)
```

With this PR:

```
explain cost select c1, c2, c3 from w_t;

== Optimized Logical Plan ==
RelationV2[c1#5, c2#6, c3#7] default.w_t, Statistics(sizeInBytes=5.5 MiB, rowCount=1.00E+5)

explain cost select * from w_t;

== Optimized Logical Plan ==
RelationV2[c1#36, c2#37, c3#38, c4#39, c5#40, c6#41, c7#42, c8#43, c9#44, c10#45, c11#46, c12#47, c13#48, c14#49, c15#50, c16#51, c17#52, c18#53, c19#54, c20#55] default.w_t, Statistics(sizeInBytes=36.8 MiB, rowCount=1.00E+5)
```

In general, sizeInBytes affects broadcast joins, runtime filters, etc. So it can happen that nothing changes with this PR if a query does not hit one of those cases.
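For example (a hypothetical query; other_t is an assumed second table, and the config key is Spark's standard broadcast threshold), the pruned estimate of 5.5 MiB now falls under the default 10 MiB spark.sql.autoBroadcastJoinThreshold, so the planner may switch to a broadcast hash join where it previously chose a sort-merge join:

```scala
// Hypothetical illustration: with pruned stats (5.5 MiB < 10 MiB threshold),
// Spark may now plan a broadcast hash join for the scanned side.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", 10L * 1024 * 1024)
spark.sql(
  """SELECT t1.c1, t1.c2, t2.c3
    |FROM w_t t1 JOIN other_t t2 ON t1.c1 = t2.c1""".stripMargin
).explain("cost")
```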

Review thread on the diff:

```scala
.map(_.avgLen())
.filter(_.isPresent)
.map(_.getAsLong)
.getOrElse(field.`type`().defaultSize().toLong)
```
Contributor: Should we use the default size of Spark's field here? Because the data will actually be converted to a Spark row.

Contributor Author: I prefer to use Paimon's data type default size. During the scan we have not yet converted to Spark rows; Paimon returns a SparkInternalRow to Spark, which is backed by Paimon's memory structures. The row conversion happens when Spark wraps a Project, and if there is a Project, Spark re-calculates the sizeInBytes using Spark's data type default sizes.
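A conceptual sketch of that re-calculation (not Spark's exact code; names are illustrative): the Project's size estimate is derived from the child's by scaling with Spark-side per-attribute default sizes, so the scan-side estimate only needs to be sound at scan time:

```scala
import org.apache.spark.sql.catalyst.expressions.Attribute

// Conceptual only: roughly how a Project's sizeInBytes can be re-derived
// from its child using Spark's DataType.defaultSize per attribute.
def projectedSizeInBytes(
    childSizeInBytes: BigInt,
    childOutput: Seq[Attribute],
    projectOutput: Seq[Attribute]): BigInt = {
  val childRowSize  = childOutput.map(_.dataType.defaultSize).sum.max(1)
  val outputRowSize = projectOutput.map(_.dataType.defaultSize).sum
  childSizeInBytes * outputRowSize / childRowSize
}
```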

Contributor: Got it!

@Zouxxyy (Contributor) left a review: +1

@JingsongLi (Contributor): +1

@JingsongLi merged commit 5da3ff5 into apache:master on Sep 13, 2024 (9 of 10 checks passed).
@ulysses-you deleted the statistics branch on September 13, 2024 at 07:43.