
feat: add all, any and null_count Spark Expressions #1724

Merged · 19 commits into narwhals-dev:main on Jan 10, 2025

Conversation

lucas-nelson-uiuc
Contributor

What type of PR is this? (check all applicable)

  • 💾 Refactor
  • ✨ Feature
  • 🐛 Bug Fix
  • 🔧 Optimization
  • 📝 Documentation
  • ✅ Test
  • 🐳 Other

Related issues

Checklist

  • Code follows style guide (ruff)
  • Tests added
  • Documented the changes

If you have comments or can explain your changes, please do so below

Added the following methods to SparkLikeExpr and SparkLikeNamespace:

  • any
  • all
  • null_count
  • any_horizontal

Copied the respective tests over; I couldn't run the pyspark suite without Java on my machine, but running the methods locally against their respective test datasets worked for me.

Let me know if anything needs to be updated!
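For context, the three reductions map onto Spark aggregate semantics (`bool_or`, `bool_and`, and a null count). A minimal pure-Python sketch of what they compute, as an illustration only, not the actual SparkLikeExpr code; Spark skips nulls in `bool_or`/`bool_and`, and the all-null edge case (where Spark returns null) is simplified here:

```python
# Pure-Python stand-ins for the Spark reductions added in this PR.
# These model the semantics over a plain list, with None playing
# the role of a null value.

def bool_or(column):
    # True if any non-null value is truthy (nulls are skipped)
    return any(v for v in column if v is not None)

def bool_and(column):
    # True if every non-null value is truthy (nulls are skipped)
    return all(v for v in column if v is not None)

def null_count(column):
    # Number of null (None) entries in the column
    return sum(v is None for v in column)

col = [True, False, None, True]
print(bool_or(col), bool_and(col), null_count(col))  # True False 1
```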

@lucas-nelson-uiuc changed the title from "Missing spark expr" to "feat: add more Spark Expressions" on Jan 4, 2025
Collaborator

@EdAbati left a comment

Thank you very much for looking into this! 🙏

tests/spark_like_test.py (outdated, resolved)
narwhals/_spark_like/expr.py (outdated, resolved)
@lucas-nelson-uiuc
Contributor Author

Hey @EdAbati ,

Took an initial swing at implementing the replace_strict() method - think I took care of everything except for handling the test_replace_non_full test (checks that replacement is exhaustive). Left some thoughts and other questions in my commit - lmk what you think!

Collaborator

@EdAbati left a comment

Thanks for updating! :)

I left another couple of small comments

IMO we can just merge all, any and null_count here and worry about the rest in a follow up

narwhals/_spark_like/expr.py (outdated, resolved)
narwhals/_spark_like/expr.py (outdated, resolved)

return self._from_call(_null_count, "null_count", returns_scalar=True)

def replace_strict(
Collaborator

I am tempted to say that this should not be implemented for now and should just raise a NotImplementedError (as we do in Dask).
We would need to be able to access the dataframe (and collect the results) to get the distinct values of the column.
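To illustrate why (a hedged pure-Python sketch, not the Dask or Spark code): strict replacement in the Polars sense must verify that every distinct value in the column is covered by the mapping, which for a lazy Spark DataFrame would require collecting the distinct values first.

```python
def replace_strict(column, mapping):
    # Polars-style strict replace: every value in the column must
    # appear in the mapping, otherwise raise. On an eager list this
    # check is trivial; on a lazy Spark DataFrame it would require
    # collecting the distinct values of the column first, which is
    # why raising NotImplementedError is proposed instead.
    unmatched = set(column) - set(mapping)
    if unmatched:
        raise ValueError(f"values not covered by mapping: {unmatched}")
    return [mapping[v] for v in column]

print(replace_strict([1, 2, 1], {1: "a", 2: "b"}))  # ['a', 'b', 'a']
```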

@FBruzzesi and @MarcoGorelli any thoughts?

Member

> I am tempted to say that this should not be implemented for now and should just raise a NotImplementedError (as we do in Dask).

Sure we can evaluate if and how to support replace_strict later on. Super excited to ship the rest for now 🙌🏼

Member

agree!

lucas-nelson-uiuc and others added 2 commits January 9, 2025 14:38
Co-authored-by: Edoardo Abati <[email protected]>
Co-authored-by: Edoardo Abati <[email protected]>
@lucas-nelson-uiuc
Contributor Author

Thanks everyone - committed the changes above. Let me know if anything else needs to be changed.

@MarcoGorelli
Member

awesome, thanks! just looks like there's some merge conflicts

@lucas-nelson-uiuc
Contributor Author

Might've committed some git crimes resolving that merge conflict :') apologies in advance

Comment on lines -27 to +22

- result = df.select(nw.all().any())
+ result = df.select(nw.col("a", "b", "c").any())
Member

@FBruzzesi commented on Jan 10, 2025

@MarcoGorelli pyspark fails to maintain the original name with nw.all().<expression_method>. @EdAbati maybe you know why and where this is happening already.

Here it would result in:

FAILED tests/expr_and_series/any_all_test.py::test_any_all[pyspark] - AssertionError: Expected column name a at index 0, found bool_or(a)
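A hedged sketch of the naming issue (pure-Python stand-in, not the narwhals internals): Spark names an aggregated column after the function, e.g. `a` becomes `bool_or(a)`, so the expression layer has to alias the result back to the input column name for the test's expected names to match. The helper names below are hypothetical.

```python
def bool_or_agg(col_name: str) -> dict:
    # Spark's default output name for an aggregate is "<func>(<col>)",
    # which is what the failing test observed ("bool_or(a)" instead of "a").
    return {"name": f"bool_or({col_name})", "source": col_name}

def with_original_name(agg: dict) -> dict:
    # The usual fix: the equivalent of Column.alias(original_name),
    # restoring "a" from "bool_or(a)".
    return {**agg, "name": agg["source"]}

agg = bool_or_agg("a")
print(agg["name"])                       # bool_or(a)
print(with_original_name(agg)["name"])   # a
```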

Member

@FBruzzesi left a comment

Let's ship this 🚀! Thanks @lucas-nelson-uiuc, and apologies for such a long back and forth on this PR.


@FBruzzesi changed the title from "feat: add more Spark Expressions" to "feat: add all, any and null_count Spark Expressions" on Jan 10, 2025
@FBruzzesi merged commit 0c98b60 into narwhals-dev:main on Jan 10, 2025
24 checks passed
@lucas-nelson-uiuc deleted the missing-spark-expr branch on January 10, 2025 at 23:25
@MarcoGorelli added the "enhancement" (New feature or request) label on Jan 11, 2025
4 participants