
Test SDXL MLPerf inference on AMD GPU with ROCm for SCC'24 #300

Open
gfursin opened this issue Sep 26, 2024 · 4 comments

gfursin commented Sep 26, 2024

https://docs.mlcommons.org/inference/benchmarks/text_to_image/reproducibility/scc24

@gfursin gfursin self-assigned this Sep 26, 2024

gfursin commented Sep 26, 2024

Need to provide a working configuration.


gfursin commented Sep 26, 2024

Hi @arjunsuresh. Which AMD GPU and ROCm version did you use to test this workflow? I would like to give it a try. Thanks a lot!

@arjunsuresh
Contributor

Hi @gfursin, I'm not sure of the exact GPU name, as it was tested by the AMD team, but any AMD GPU supported by ROCm should work. We used ROCm 6.2; the driver needs to be installed manually, and the rest of the dependencies should be picked up by CM.

We also have the SCC24 GitHub Action, and we could add "rocm" there as well once we have a machine for it: https://github.com/mlcommons/cm4mlops/blob/main/.github/workflows/test-scc24-sdxl.yaml#L17
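For reference, that change would presumably be a one-line addition to the device list in the workflow's job matrix. The key names and existing entries below are assumptions about how test-scc24-sdxl.yaml might be structured, not a quote from the file:

```yaml
# Hypothetical sketch of .github/workflows/test-scc24-sdxl.yaml
jobs:
  run-scc24-sdxl:
    strategy:
      matrix:
        # "device" key and the "cuda" entry are assumed; adding "rocm"
        # would let the SCC24 SDXL test also run on an AMD/ROCm runner.
        device: [ "cuda", "rocm" ]
```

A self-hosted runner with an AMD GPU and the ROCm driver preinstalled would be needed for the "rocm" entry to actually execute.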


gfursin commented Oct 16, 2024

I just tried to run the benchmark on AMD MI300X with ROCm 6.2 and PyTorch 2.6 - it resolved all dependencies but failed in loadgen. Please see mlcommons/cm4mlperf-inference#48 .
