The MLCommons task force on automation and reproducibility is developing an open-source Collective Knowledge platform to make it easier for the community to run, visualize and optimize MLPerf benchmarks out of the box across diverse software, hardware, models and data.
This tutorial demonstrates how to run and/or reproduce the MLPerf training benchmark with the help of the MLCommons CM automation language.
If you have any questions about this tutorial, please get in touch via our public Discord server or open a GitHub issue here.
Follow this guide to install the MLCommons CM automation language on your platform.
We have tested this tutorial with Ubuntu 20.04 and Windows 10.
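For reference, a typical installation on Ubuntu 20.04 looks like the sketch below. It is only a summary of the usual steps; the package and repository names follow the CM documentation of this period and may change, so treat the linked install guide as authoritative.

```bash
# Install system prerequisites (Ubuntu 20.04)
sudo apt-get update
sudo apt-get install -y python3 python3-pip git wget

# Install the CM (Collective Mind) automation language from PyPI
python3 -m pip install cmind

# Pull the MLCommons repository with reusable CM automation scripts
# (the repository name may differ in newer CM docs; see the install guide above)
cm pull repo mlcommons@ck

# Quick sanity check: run a simple CM script that detects the host OS
cm run script --tags=detect,os -j
```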
To be continued ...
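While the detailed steps are still being written, the sketch below shows the general shape of a CM-automated benchmark run. The script tags and flags are assumptions used for illustration only and are not the final documented command.

```bash
# NOTE: illustrative sketch only. The script tags and flags below are
# assumptions; check the finished tutorial for the official command.
# CM-automated MLPerf runs are typically a single "cm run script" call
# that resolves and installs all dependencies on the fly:
cm run script --tags=run,mlperf,training,reference \
    --device=cuda \
    --quiet
```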
Follow this guide.
You can visualize and compare MLPerf results here. You can also use this collaborative platform inside your organization to reproduce and optimize the benchmarks and applications you are interested in.
Please join the MLCommons task force on automation and reproducibility to get free help automating and optimizing MLPerf benchmarks for your software and hardware stack using the MLCommons CM automation language!