> **Note:** This repository was archived by the owner on Jul 8, 2024 and is now read-only.


# ADBench - autodiff benchmarks

This project provides a running-time comparison of different tools for automatic differentiation, as described in https://arxiv.org/abs/1807.10129 (source in Documentation/ms.tex). It outputs a set of relevant graphs (see Graph Archive).

At the start of the 2020s, the graph for GMM (Gaussian Mixture Model, a nice "messy" workload with interesting derivatives) looked like this:

(Figure: GMM benchmark results, Jan 2020.)
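To give a flavour of why GMM derivatives are "interesting", here is a minimal pure-Python sketch (not ADBench code, and simplified to one dimension) that differentiates a GMM log-likelihood with central finite differences — the kind of gradient that the AD tools under test compute analytically:

```python
import math

def gmm_log_likelihood(means, sigmas, weights, data):
    """Log-likelihood of 1-D data under a Gaussian mixture model."""
    total = 0.0
    for x in data:
        # Mixture density: weighted sum of Gaussian components.
        p = sum(
            w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
            for m, s, w in zip(means, sigmas, weights)
        )
        total += math.log(p)
    return total

def grad_wrt_means(means, sigmas, weights, data, h=1e-6):
    """Gradient w.r.t. the component means via central finite differences."""
    grad = []
    for i in range(len(means)):
        up, dn = means.copy(), means.copy()
        up[i] += h
        dn[i] -= h
        grad.append(
            (gmm_log_likelihood(up, sigmas, weights, data)
             - gmm_log_likelihood(dn, sigmas, weights, data)) / (2.0 * h)
        )
    return grad

data = [0.1, -0.4, 1.2, 2.0]
g = grad_wrt_means([0.0, 2.0], [1.0, 1.0], [0.6, 0.4], data)
print(g)  # one partial derivative per component mean
```

The real GMM objective in ADBench is multivariate and parameterised further (covariances, mixture weights), which is exactly what makes its derivatives a good stress test.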

For information about the layout of the project, see Development.

For information about the current status of the project, see Status.

## Methodology

For an explanation of how we perform the benchmarking, see Benchmarking Methodology and Jacobian Correctness Verification.
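The core idea behind correctness verification can be sketched in a few lines of Python. This is an illustrative toy, not ADBench's actual checker (which works on the full benchmark objectives and uses its own tolerances): compare each entry of a tool's Jacobian against central finite differences.

```python
def verify_jacobian(f, jac, x, h=1e-6, tol=1e-4):
    """Check a supplied Jacobian function against central finite
    differences of f, entry by entry. Returns True if all entries agree."""
    J = jac(x)
    n_out, n_in = len(f(x)), len(x)
    for j in range(n_in):
        up, dn = list(x), list(x)
        up[j] += h
        dn[j] -= h
        f_up, f_dn = f(up), f(dn)
        for i in range(n_out):
            fd = (f_up[i] - f_dn[i]) / (2.0 * h)
            if abs(fd - J[i][j]) > tol:
                return False
    return True

# Toy example: f(x, y) = (x*y, x + y) and its analytic Jacobian.
f = lambda v: [v[0] * v[1], v[0] + v[1]]
jac = lambda v: [[v[1], v[0]], [1.0, 1.0]]
print(verify_jacobian(f, jac, [1.5, -2.0]))  # prints True
```

A deliberately wrong Jacobian (e.g. all zeros) would make the same call return False, which is the failure mode the verification step is designed to catch.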

## Build and Run

The easiest way to build and run the benchmarks is to use Docker. If that doesn't work for you, please refer to our build and run guide.

## Plot Results

Use the ADBench/plot_graphs.py script to plot graphs of the resulting timings:

```
python ADBench/plot_graphs.py --save
```

This will save the graphs as .png files to tmp/graphs/static/.

Refer to PlotCreating for the complete documentation and the other supported command-line arguments.
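The graphs essentially compare each tool's running time against the others. A trivial sketch of that comparison, using made-up numbers (the tool names and timings below are placeholders, not real ADBench results):

```python
# Hypothetical timings in seconds — placeholders, not real benchmark output.
timings = {"ToolA": 0.12, "ToolB": 0.45, "ToolC": 0.08}

fastest = min(timings.values())
# Print tools from fastest to slowest, with the slowdown relative to the best.
for tool, t in sorted(timings.items(), key=lambda kv: kv[1]):
    print(f"{tool}: {t:.2f} s ({t / fastest:.1f}x fastest)")
```

The actual plots show such timings across a range of problem sizes per benchmark, which is far more informative than a single ratio.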

## Graph Archive

From time to time we run the benchmarks and publish the resulting plots here: https://adbenchwebviewer.azurewebsites.net/

The cloud infrastructure that generates these plots is described here.

## Contributing

Contributions to fix bugs, test on new systems, or add new tools are welcome. See Contributing for details on how to add new tools, and Issues for known bugs and TODOs. This project has adopted the Microsoft Open Source Code of Conduct.

## Known Issues

See Issues for a list of known problems and TODOs; the GitHub issue page tracks further bugs.