[GPU] Add check for multi-axis broadcasting in is_valid_fusion() for dynamic shapes #28252
```diff
@@ -2658,6 +2658,19 @@ bool primitive_inst::is_valid_fusion() const {
         if (fd.is_type<eltwise>())
             can_broadcast = ov::PartialShape::broadcast_merge_into(merged_shape, outer_dep_pshape, fd.typed_desc<eltwise>()->broadcast_spec);
+
+        // Check whether broadcasting happens on more than one axis.
+        // The current FUSED_OP_LOAD macro cannot support broadcast on a dynamic dimension.
+        if (can_broadcast && (merged_shape.is_static() && outer_dep_pshape.is_static()) &&
+            outer_dep.first->_is_dynamic && merged_shape.rank().get_length() == outer_dep_pshape.rank().get_length()) {
+            uint8_t broadcast_axis_count = 0;
+            for (int64_t i = 0; i < merged_shape.rank().get_length(); i++) {
+                if (merged_shape.get_shape().at(i) != outer_dep_pshape.get_shape().at(i))
+                    broadcast_axis_count++;
+            }
+            if (broadcast_axis_count > 1)
+                can_broadcast = false;
+        }

 #ifdef ENABLE_ONEDNN_FOR_GPU
         // WA for OneDNN binary add fusions: we need to broadcast batch dimension to avoid situation with
         // batch dimension mismatch in OneDNN tensor descriptors as follow:
```

Reviewer (on the `outer_dep.first->_is_dynamic` condition): Do we need this condition? Even when the primitive is not dynamic, the fusion is still not allowed, isn't it? Also, this function is not called for static primitives.

Author: You are right; is_valid_fusion() is only called in the dynamic-shape case. I removed it. Thanks!
Reviewer (on the `merged_shape.is_static() && outer_dep_pshape.is_static()` checks): Are these two conditions needed? At runtime, shapes should always be static.

Author: You are right, I removed them. Thanks!
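For illustration, here is a minimal standalone sketch (not part of the PR) of the same multi-axis detection logic, exercised against OpenVINO's public `ov::PartialShape` API. The helper `count_broadcast_axes`, the example shapes, and the NUMPY broadcast spec are assumptions made for the example, not taken from the patch.

```cpp
// Hypothetical standalone sketch (not part of the PR) of the multi-axis
// broadcast check, using OpenVINO's public PartialShape API.
#include <cstdint>
#include <iostream>

#include "openvino/core/partial_shape.hpp"
#include "openvino/op/util/attr_types.hpp"

// Counts the axes on which `dep` must be broadcast to match `merged`.
// Mirrors the PR's guard: both shapes are static and of equal rank.
static int count_broadcast_axes(const ov::PartialShape& merged, const ov::PartialShape& dep) {
    int count = 0;
    for (int64_t i = 0; i < merged.rank().get_length(); i++) {
        if (merged.get_shape().at(i) != dep.get_shape().at(i))
            count++;
    }
    return count;
}

int main() {
    // Example shapes (assumed for illustration): the dependency would be
    // broadcast on axes 0, 2 and 3, i.e. on more than one axis.
    ov::PartialShape dep{1, 16, 1, 1};
    ov::PartialShape merged{2, 16, 4, 4};

    // broadcast_merge_into() merges `dep` into `merged` in place and
    // reports whether the two shapes are broadcast-compatible.
    bool can_broadcast =
        ov::PartialShape::broadcast_merge_into(merged, dep, ov::op::AutoBroadcastType::NUMPY);

    if (can_broadcast && merged.is_static() && dep.is_static() &&
        merged.rank().get_length() == dep.rank().get_length() &&
        count_broadcast_axes(merged, dep) > 1) {
        can_broadcast = false;  // reject: more than one broadcast axis
    }

    std::cout << std::boolalpha << can_broadcast << std::endl;  // prints "false"
}
```

As the review discussion above notes, the `_is_dynamic` and `is_static()` guards were dropped from the final patch, since is_valid_fusion() is only invoked for dynamic primitives and shapes are always static at runtime.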