
dialects: (vector) Add for vector.transfer_read and vector.transfer_write operations #3650

Open — wants to merge 29 commits into base: main

Conversation

watermelonwolverine (Contributor)

Added stubs for "vector.transfer_read" and "vector.transfer_write".
Verification is still missing, but that also seems to be a lot of work.

Also fixed a small bug in TensorOrMemrefOf.verify (?)

Fixed bug in TensorOrMemrefOf.verify (?)
@watermelonwolverine marked this pull request as draft on December 17, 2024 23:23
codecov bot commented Dec 17, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 90.19%. Comparing base (3d4d51d) to head (b9d927b).
Report is 16 commits behind head on main.

@@            Coverage Diff             @@
##             main    #3650      +/-   ##
==========================================
+ Coverage   90.17%   90.19%   +0.02%     
==========================================
  Files         458      458              
  Lines       58304    58530     +226     
  Branches     5694     5722      +28     
==========================================
+ Hits        52573    52794     +221     
  Misses       4339     4339              
- Partials     1392     1397       +5     


@compor (Collaborator) left a comment:

LGTM modulo comments

```diff
-if isinstance(attr, VectorType) or isinstance(attr, TensorType):
-    attr = cast(VectorType[Attribute] | TensorType[Attribute], attr)
+if isinstance(attr, MemRefType) or isinstance(attr, TensorType):
+    attr = cast(MemRefType[Attribute] | TensorType[Attribute], attr)
```
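The bug in the check above can be reproduced in a minimal standalone sketch. The `Dummy*` classes and `verify_tensor_or_memref` are illustrative stand-ins, not the real xdsl attribute types or constraint API:

```python
# Illustrative stand-ins for the real xdsl attribute types.
class DummyTensorType: ...
class DummyMemRefType: ...
class DummyVectorType: ...


def verify_tensor_or_memref(attr: object) -> None:
    # Before the fix, the isinstance check tested for VectorType, so a
    # memref attribute was never accepted even though the constraint's
    # name says it should accept tensors and memrefs.
    if not isinstance(attr, (DummyMemRefType, DummyTensorType)):
        raise TypeError(f"expected tensor or memref, got {type(attr).__name__}")


verify_tensor_or_memref(DummyMemRefType())  # accepted after the fix
```

With the corrected check, a memref passes and a vector is rejected, which matches what the constraint's name promises.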
Collaborator:

Would you mind moving this to a separate PR with a test case?
We seem to be lacking any verification testing on this type.

Collaborator:

Seeing the changes, I'd still recommend doing this.

Contributor Author:

I will do that in the next few days.

@compor added the dialects (Changes on the dialects) label on Dec 18, 2024
@compor changed the title from "dialects: (vector) Added stubs for transfer_read and transfer_write" to "dialects: (vector) Add for vector.transfer_read and vector.transfer_write operations" on Dec 18, 2024
….transfer_write" ops

This blew up and should now be several PRs
…ArrayAttr[BoolAttr] instead of a IntegerAttr
@watermelonwolverine (Contributor Author) commented on Dec 18, 2024:

  • I started implementing the verification and finished most of it.
    • Only verification for the cases where the element_type of the source is itself a vector type is missing.
  • Problem: to fully implement verification as it is done in MLIR, the VectorType needs to be changed first.
    • It needs a bitmap (ArrayAttr[BoolAttr]) for the scalable dimensions instead of a single IntegerAttr holding the number of scalable dimensions.
    • I could skip this verification step and do everything else.
    • I made some of those changes so you can see for yourself what I mean.
  • I also had to add several functions to builtin classes like AffineMap and AffineExpr.
    • Should I do a separate PR for those, too?
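The difference between the two encodings of scalable dimensions described above can be sketched in a few lines. `VectorShape` and its fields are hypothetical illustrations, not the real xdsl `VectorType` API:

```python
from dataclasses import dataclass


@dataclass
class VectorShape:
    """Hypothetical stand-in for a vector type's shape information."""

    shape: list[int]           # e.g. vector<4x[4]xf32> -> [4, 4]
    scalable_dims: list[bool]  # per-dimension flags    -> [False, True]

    @property
    def num_scalable_dims(self) -> int:
        # The single-IntegerAttr encoding only kept this count, which
        # cannot say *which* dimensions are scalable.
        return sum(self.scalable_dims)
```

Two vectors with the same count of scalable dimensions can still differ in which dimensions are scalable, which is why the count alone is insufficient for MLIR-style verification:

```python
a = VectorShape(shape=[4, 4], scalable_dims=[True, False])
b = VectorShape(shape=[4, 4], scalable_dims=[False, True])
# a.num_scalable_dims == b.num_scalable_dims, yet a and b are distinct types.
```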


…f transfer ops from MLIR
@compor (Collaborator) commented on Dec 20, 2024:

You can always verify what is currently here and document what deviates (as you have done); this is fine IMHO.

Thanks for the work so far!

Transferred more filecheck tests for transfer ops from MLIR
Added the (now mandatory) in_bounds property
@watermelonwolverine marked this pull request as ready for review on December 28, 2024 17:48
Comment on lines 484 to 489
```python
# WJOG9GVF: TODO fix: uncomment this when 7S4F0FZA has been fixed
# if mask_type:
#     if mask_type != inferred_mask_type:
#         raise VerifyException(
#             f'"{op.name}" inferred mask type ({inferred_mask_type}) and mask operand type ({mask_type}) don\'t match'
#         )
```
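The disabled check above compares the transfer op's mask operand type against the mask type inferred from the permutation map. A simplified standalone sketch of that comparison, using plain shapes in place of full vector-of-i1 types (the function name and signature are illustrative, not the xdsl API):

```python
def verify_mask_shape(op_name, mask_shape, inferred_mask_shape):
    """Raise if a present mask does not match the inferred mask shape.

    A missing mask (None) is fine; only a present-but-mismatched mask
    is a verification error, mirroring the commented-out check.
    """
    if mask_shape is not None and mask_shape != inferred_mask_shape:
        raise ValueError(
            f'"{op_name}" inferred mask type ({inferred_mask_shape}) and '
            f"mask operand type ({mask_shape}) don't match"
        )
```

For example, a `vector.transfer_read` with mask shape `(4, 8)` against an inferred `(4, 8)` passes, while `(4,)` against `(4, 8)` raises.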
Member:

?

Contributor Author:

Same as below. I will move it to a new pull request.

@superlopuh (Member) commented on Feb 7, 2025:

Hi @watermelonwolverine! I'd love to get this in, please let me know if you'd like some help with the PR. From my understanding, these bits of code that are currently in this PR would benefit from their own mini PRs:

  • is_function_of_dim + test
  • compose_with_values + test
  • used/unused_dims + test

Do you have time to have a go at these? I'm hoping it will be a fairly easy cherry-picking process, I'm happy to review each one very quickly :) After that, I think this PR should be ready to merge with minor changes, but it would be good to reduce it first.
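The kind of dimension bookkeeping the listed helpers perform can be sketched on a toy tuple-encoded affine expression; the real xdsl `AffineExpr`/`AffineMap` APIs differ, and the grammar here is purely illustrative: `("dim", i) | ("const", c) | (op, lhs, rhs)` with `op` in `{"+", "*"}`.

```python
def used_dims(expr) -> set[int]:
    """Collect the dimension indices appearing in a toy affine expr."""
    kind = expr[0]
    if kind == "dim":
        return {expr[1]}
    if kind == "const":
        return set()
    # Binary node: union of the dims used on each side.
    return used_dims(expr[1]) | used_dims(expr[2])


def is_function_of_dim(expr, dim: int) -> bool:
    """True if the expression depends on dimension `dim`."""
    return dim in used_dims(expr)


# Toy encoding of (d0, d1, d2) -> d0 + d2 * 4: uses d0 and d2, not d1.
expr = ("+", ("dim", 0), ("*", ("dim", 2), ("const", 4)))
```

Splitting helpers like these into their own small PRs with tests keeps each review focused on one well-defined behavior.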


…ead_transfer_write_ops
@watermelonwolverine (Contributor Author):

This PR is actually pretty much done from my side.
There are a few verifications I had to remove because they depend on #3654 being fixed. I plan to create a new pull request afterwards to add the verifications back in.
There are a few TODOs in verification and testing, but I think those deserve to be their own PRs.

@watermelonwolverine (Contributor Author):

I can split this PR into smaller PRs over the next few days if it is simply too big.

@superlopuh (Member):

Yeah, that would be great, thank you.

@superlopuh (Member):

Just opened #3947, which fixes the above issue. Would you have the time to iterate on this in the coming weeks?

@watermelonwolverine (Contributor Author):

Hi, I am very busy with my thesis, but I think I can still fit this in somehow. I'll update this PR once the other PR is merged. Should I still split this PR into smaller PRs?

@superlopuh (Member):

That would be fantastic, but please also let me know if there's a better time to check in on this. There's no particular rush; it's just nice to be able to get to 0 open PRs eventually.

superlopuh pushed a commit that referenced this pull request Mar 5, 2025
… AffineExpr (#4007)

Add a couple of utility functions to AffineMap and AffineExpr that deal
with dimensions.
Cherry picked from #3650, prerequisite for implementing
`vector.transfer_read` and `vector.transfer_write` ops
Labels: dialects (Changes on the dialects)
3 participants