The MatMulInteger op can accept both uint8 and int8 types as inputs, but only int8 is currently supported.
MatMulInteger is parsed as quant_dot, which restricts its inputs to int8; it should allow uint8 as well.
Also, the a_zero_point and b_zero_point optional inputs are not handled in MIGraphX.
MatMulInteger is needed for the BERT-Squad int8 ONNX zoo model.
At the moment it fails with the following error (I used the changes from this PR to pass through the unimplemented operator): what(): /code/AMDMIGraphX/src/include/migraphx/check_shapes.hpp:210: same_type: quant_dot: Types do not match
Actions required:
Update the MatMulInteger implementation to accept the uint8 type for the T1 and T2 inputs
Update the MatMulInteger implementation to process the a_zero_point and b_zero_point input tensors
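For reference, the ONNX spec defines MatMulInteger as subtracting the zero points from the (u)int8 inputs and accumulating the product in int32. A minimal NumPy sketch of those semantics (the function name and signature here are illustrative, not MIGraphX API):

```python
import numpy as np

def matmul_integer(a, b, a_zero_point=0, b_zero_point=0):
    # Per the ONNX spec: widen the (u)int8 inputs to int32,
    # subtract the zero points, then matrix-multiply in int32.
    a = a.astype(np.int32) - np.int32(a_zero_point)
    b = b.astype(np.int32) - np.int32(b_zero_point)
    return a @ b

# A uint8 input with a zero point of 128 covers the same effective
# range as an int8 input, which suggests one possible lowering for a
# backend whose quant_dot only supports int8.
a = np.array([[129, 130]], dtype=np.uint8)   # effectively [[1, 2]]
b = np.array([[3], [4]], dtype=np.uint8)
print(matmul_integer(a, b, a_zero_point=128))  # [[11]]
```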