Add LLVM to Affine access pass #303
base: main
Conversation
//===----------------------------------------------------------------------===//
// AtAddrOp
//===----------------------------------------------------------------------===//
We should use pointer2memref instead.
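For context, a sketch of what that alternative could look like. The op name `enzymexla.pointer2memref` and the result type are assumptions here, following the Polygeist-style `pointer2memref` convention rather than introducing a new AtAddrOp:

```mlir
// Sketch only: turn an LLVM pointer into a memref view with a
// pointer2memref-style op (op name assumed, not confirmed by this PR).
func.func @use_ptr(%arg0: !llvm.ptr<1>) {
  %m = "enzymexla.pointer2memref"(%arg0) : (!llvm.ptr<1>) -> memref<?xi8, 1>
  %c0 = arith.constant 0 : index
  %v = memref.load %m[%c0] : memref<?xi8, 1>
  return
}
```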
}]>
];
}
@ftynse @ivanradanov
Is this something upstreamable, or is it already upstream?
It says:
This operation is intended solely as step during lowering, it has no side effects. A reverse operation that creates a memref from an index interpreted as a pointer is explicitly discouraged.
// CHECK-LABEL: func @test_load_store_conversion
// CHECK-SAME: %[[ARG0:.*]]: !llvm.ptr<1>
// CHECK-SAME: %[[ARG1:.*]]: i64
// CHECK: %[[MEMREF:.*]] = "enzymexla.ataddr"(%[[ARG0]]) {{.*}} memref<?xi8, 1>
The original load is typed as i64; can we preserve that here instead of lowering to a vector of i8?
My idea to solve this issue was to have a custom load op which does
affine.vector_load_ty %memref[%idx] vector<4xi8> : memref<i8x?>, i64 -> f64
which combines the affine.vector_load and the subsequent bitcast into one operation. The current vector-load-plus-bitcast approach may lose information about the original type (alignment, etc.) and break things.
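For comparison, a sketch of the two-step lowering the combined op would replace (memref shape and vector sizes illustrative, not taken from this PR):

```mlir
// Load 8 bytes as a vector of i8, then reinterpret them as f64.
// The i8 typing drops the original element type, so alignment and
// type information are no longer visible to later passes.
%v = affine.vector_load %memref[%idx] : memref<?xi8>, vector<8xi8>
%f = vector.bitcast %v : vector<8xi8> to vector<1xf64>
```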
// -----

func.func @test_struct_access(%arg0: !llvm.ptr) {
  %ptr = llvm.getelementptr %arg0[0, 0] : (!llvm.ptr) -> !llvm.ptr, !llvm.struct<(i64)>
I feel like it is better to apply this pass after converting all of the arguments to memrefs; that way we know the multidimensional sizes and the underlying types.
Do we ever get multidimensional arrays in our compilation pipeline?
I can see a problem with handling that correctly, since all of the loads inside will use a linear index. For example,
f(%ptr : !llvm.ptr) {
...
load %j * 4 + %i
should become
f(%ptr : memref<10x4xi8>) {
...
affine.load %ptr[%j, %i]
But that is legal only if we can prove that 0 <= %i < 4.
Otherwise it needs to be
affine.load %ptr[%j + %i floordiv 4, %i mod 4]
I guess we should generate the second form and use available information to simplify it to the first one.
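The two forms above can be sketched side by side (shape assumed to be 10x4 as in the example):

```mlir
// With the bound 0 <= %i < 4 proven, the linear offset %j * 4 + %i
// maps directly onto the two dimensions:
%a = affine.load %m[%j, %i] : memref<10x4xi8>

// Without that bound, only the delinearized form is always correct;
// it computes the same linear offset %j * 4 + %i whenever 0 <= %i < 4:
%b = affine.load %m[%j + %i floordiv 4, %i mod 4] : memref<10x4xi8>
```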
We can also run it before converting the args to memrefs, but then we should have a simplification pattern that rewrites load(pointer2memref(memref2pointer(memref))) to load(memref) and adjusts the indices appropriately.
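A sketch of that round-trip fold, with op names assumed to mirror the Polygeist-style memref2pointer/pointer2memref pair (not confirmed by this PR):

```mlir
// Before: the memref is round-tripped through an LLVM pointer, and the
// load uses a linear byte index into the flattened view.
%p  = "enzymexla.memref2pointer"(%m) : (memref<10x4xi8>) -> !llvm.ptr
%m2 = "enzymexla.pointer2memref"(%p) : (!llvm.ptr) -> memref<?xi8>
%v  = affine.load %m2[%lin] : memref<?xi8>

// After: the pattern drops the round-trip and delinearizes the index
// against the original shape.
%v2 = affine.load %m[%lin floordiv 4, %lin mod 4] : memref<10x4xi8>
```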
Extracted from #265. Added tests.