
MatMulInteger op is partially supported #2512

Closed
3 tasks done
gyulaz-htec opened this issue Dec 5, 2023 · 1 comment
Assignees
Labels
onnx issues related to onnx support

Comments


gyulaz-htec commented Dec 5, 2023

Per the ONNX spec, the MatMulInteger op accepts both uint8 and int8 inputs, but only int8 is currently supported.
MatMulInteger is parsed as quant_dot, which restricts its inputs to int8; it should allow uint8 as well.
The optional a_zero_point and b_zero_point inputs are also not handled in MIGraphX.

MatMulInteger is needed for the BERT-Squad int8 onnx zoo model.
At the moment it fails with the following error (I've used the changes from this PR to pass through an unimplemented operator):
what(): /code/AMDMIGraphX/src/include/migraphx/check_shapes.hpp:210: same_type: quant_dot: Types do not match

Actions required:

  • Update MatMulInteger implementation to accept uint8 type for T1 and T2 inputs
  • Update MatMulInteger implementation to process a_zero_point and b_zero_point input tensors
  • Don't forget to enable test_matmulinteger_cpu in onnx_backend:test.py
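For reference, the expected semantics of the two missing pieces above can be sketched in NumPy: the ONNX spec defines MatMulInteger as a matrix product over the zero-point-adjusted inputs, accumulated in int32. The function name here is illustrative, not an existing MIGraphX or onnxruntime API:

```python
import numpy as np

def matmul_integer(a, b, a_zero_point=0, b_zero_point=0):
    # Per the ONNX spec: inputs are uint8 or int8, zero points are
    # subtracted first, and the product is accumulated in int32.
    a32 = a.astype(np.int32) - np.int32(a_zero_point)
    b32 = b.astype(np.int32) - np.int32(b_zero_point)
    return a32 @ b32

a = np.array([[11, 7], [3, 6]], dtype=np.uint8)  # uint8 input (T1)
b = np.array([[1, 4], [2, 5]], dtype=np.uint8)   # uint8 input (T2)
print(matmul_integer(a, b, a_zero_point=12))
# → [[-11 -29]
#    [-21 -66]]
```

Note that casting to int32 before subtracting the zero point matters: with uint8 inputs, the adjusted values can be negative, so the subtraction cannot be done in the input type.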
@gyulaz-htec gyulaz-htec added the onnx issues related to onnx support label Dec 5, 2023
@gyulaz-htec gyulaz-htec self-assigned this Dec 6, 2023
@gyulaz-htec
Collaborator Author

Fixed in #2513
