
remove multiple archtictures to isa head and adding gemm tuning scripts #261

Merged
merged 3 commits into triton-mlir on Jul 18, 2023

Conversation


@scxiao scxiao commented Jul 13, 2023

  1. Remove the code that includes multiple versions of arch features in the ISA head.
  2. Add a script to tune GEMM performance (compatible with a script outside of this repo; the script name may be refined later).

@keryell keryell left a comment


I do not understand the PR title (besides the typo to be fixed).
Or were the 2 other files not expected to be part of this PR?

@scxiao scxiao changed the title remove adding multiple archtictures to isa head remove multiple archtictures to isa head Jul 14, 2023
@scxiao
Author

scxiao commented Jul 14, 2023

I do not understand the PR title (besides the typo to be fixed). Or were the 2 other files not expected to be part of this PR?

This PR is for the MI300X bring-up and for testing the MSFT workloads. The testing relies on a joint AMD/MSFT repository that requires these two scripts. I agree the name and location of the scripts are not ideal, but they are required by the other repository. I will fix that (rename the scripts and move them to a better location) in later PRs in this repo and the other repo, so the two repos can still work together.

@scxiao scxiao changed the title remove multiple archtictures to isa head remove multiple archtictures to isa head and adding gemm tuning scripts Jul 17, 2023
@zhanglx13
Do we need matmul_grouped.py?

@scxiao
Author

scxiao commented Jul 17, 2023

Do we need matmul_grouped.py?

I am not actually sure about that. Hi @alefimov-amd, what is your opinion on that?

@zhanglx13
I asked because I saw you added it here: https://github.com/ROCmSoftwarePlatform/msft_amd_ai_operators/pull/172/files

@scxiao
Author

scxiao commented Jul 17, 2023

I asked because I saw you added it here: https://github.com/ROCmSoftwarePlatform/msft_amd_ai_operators/pull/172/files

I see. In the script scripts/amd/gemm/tune_gemm.sh, there is the following line

DRIVER=$(echo $DRIVER | sed -e "s/matmul_grouped.py/matmul.py/g")

to replace matmul_grouped.py with matmul.py for the driver.

So everything will actually call matmul.py.
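To make the substitution above concrete, here is a minimal sketch of what that line in tune_gemm.sh does; the example DRIVER value is hypothetical and only illustrates the sed rewrite described in this thread:

```shell
# Hypothetical DRIVER value; any reference to matmul_grouped.py
# is rewritten to matmul.py before the tuning script runs it.
DRIVER="python scripts/amd/gemm/matmul_grouped.py"
DRIVER=$(echo $DRIVER | sed -e "s/matmul_grouped.py/matmul.py/g")
echo $DRIVER
# prints: python scripts/amd/gemm/matmul.py
```

So even if a caller passes matmul_grouped.py as the driver, the tuning flow ends up invoking matmul.py.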

@zhanglx13 zhanglx13 left a comment


LGTM!

@scxiao scxiao merged commit 1c86e32 into triton-mlir Jul 18, 2023
1 check passed