
ADBench Architecture

Overview

At the highest level, ADBench consists of

  • Global runner
  • Benchmark runners
  • Automatic differentiation (AD) framework testing modules
  • Result-processing scripts

Testing modules here are modules in the sense of the platform they are developed for (e.g. shared objects for C++, assemblies for .NET (Core), .py files for Python) that perform AD using the frameworks under test.

Benchmark runners are console applications that load testing modules and input data and measure the time the modules take to compute the objective function and the derivative under consideration for the loaded input data. They then write the measured times and computed derivatives to files with standardized names. We have one benchmark runner per development platform, so that the same time-measuring code can be used for all frameworks supporting that platform.

The global runner is a script that is aware of all existing benchmark runners, testing modules, and sets of input parameters for the objective functions. It runs all benchmarks in sequence using the corresponding benchmark runners while enforcing the specified hard time limits. After every benchmark it checks the accuracy of the computed derivatives.
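The kind of accuracy check involved might look like the following minimal C++ sketch, which compares a computed derivative file against a reference file element by element within a relative tolerance. The file names and the tolerance are illustrative assumptions, not part of the actual ADBench interfaces.

```cpp
// Minimal sketch of a derivative accuracy check (illustrative only).
// Reads two whitespace-separated files of doubles and compares them
// element by element within a relative tolerance.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <fstream>
#include <iostream>
#include <vector>

static std::vector<double> read_values(const char* path) {
    std::ifstream in(path);
    std::vector<double> values;
    double v;
    while (in >> v) values.push_back(v);
    return values;
}

int main() {
    const double rel_tol = 1e-6;  // assumed tolerance, for illustration
    // Hypothetical file names following the naming scheme described later.
    auto computed  = read_values("gmm_d2_K5_J_Tensorflow.txt");
    auto reference = read_values("gmm_d2_K5_J_Manual.txt");

    if (computed.size() != reference.size()) {
        std::cerr << "size mismatch\n";
        return 1;
    }
    for (std::size_t i = 0; i < computed.size(); ++i) {
        double scale = std::max(1.0, std::abs(reference[i]));
        if (std::abs(computed[i] - reference[i]) > rel_tol * scale) {
            std::cerr << "mismatch at element " << i << "\n";
            return 1;
        }
    }
    std::cout << "derivatives agree within tolerance\n";
    return 0;
}
```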

Result-processing scripts consume the outputs of the benchmark runners and process them further, e.g. by creating visualizations.

A diagram of relationships between the modules and the runners:

[Diagram: the global runner invokes the per-platform benchmark runners (C++ runner, Python runner, .NET runner, Julia runner), and each runner loads the testing modules for its platform: manual, manual (eigen), finite, ... for C++; Autograd, PyTorch, Tensorflow, ... for Python; DiffSharp, ... for .NET; Zygote, ... for Julia.]

Interfaces

Testing Modules

As mentioned above, testing modules are modules in the sense of the platform they are developed for. Their responsibility is to repeatedly compute the objective function or perform AD using the tested framework. They do not perform any I/O or time measurements; those are the responsibilities of the benchmark runners.

Their interfaces are defined strictly by the specifications of the corresponding runners, but generally contain functions for the following tasks (a sketch follows the list):

  • Converting the input data from the format in which it is provided by the calling benchmark runner into a format optimized for use with the tested AD framework
  • Repeatedly computing one of the objective functions a given number of times, saving the results into a pre-allocated internal structure optimized for use with the tested AD framework
  • Repeatedly computing a derivative of one of the objective functions a given number of times, saving the results into a pre-allocated internal structure optimized for use with the tested AD framework
  • Converting the internally saved outputs into the format specified by the runner
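As an illustration, a hypothetical C++ testing-module interface covering these four responsibilities might look like the sketch below. The type and method names are assumptions made for this sketch; the actual interfaces are defined in the runners' specifications.

```cpp
// Hypothetical testing-module interface (all names are illustrative).
#include <vector>

struct Input  { std::vector<double> parameters, data; };
struct Output { double objective; std::vector<double> jacobian; };

class ITestModule {
public:
    virtual ~ITestModule() = default;

    // Convert the runner-provided input into a layout optimized for
    // the tested AD framework.
    virtual void prepare(const Input& input) = 0;

    // Compute the objective function `times` times, saving results
    // into pre-allocated internal storage.
    virtual void calculate_objective(int times) = 0;

    // Compute the derivative of the objective `times` times, saving
    // results into pre-allocated internal storage.
    virtual void calculate_jacobian(int times) = 0;

    // Convert the internally saved results into the runner's format.
    virtual Output output() const = 0;
};
```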

Benchmark Runners

Benchmark runners are console applications that load testing modules and input data and measure the time the modules take to compute the objective function and the derivative under consideration for the loaded input data. Time measurement is performed according to the methodology. Benchmark runners are started by the global runner. They write the measured times and computed derivatives to files that are then read by the result-processing scripts.
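As a sketch of what such measurement might look like, assuming the hypothetical ITestModule interface from the previous section, a runner could time repeated evaluations roughly as follows. The repetition strategy here is illustrative; the authoritative procedure is defined by the methodology.

```cpp
// Illustrative timing loop; the real measurement procedure is defined
// by the methodology, not by this sketch.
#include <algorithm>
#include <chrono>
#include <limits>

using clock_type = std::chrono::steady_clock;

// TestModule is assumed to expose calculate_objective(int), as in the
// hypothetical interface sketched in the Testing Modules section.
// Returns the best observed per-evaluation time (in seconds) for
// module.calculate_objective(reps) across `runs` measurement runs.
template <typename TestModule>
double time_objective(TestModule& module, int reps, int runs) {
    double best = std::numeric_limits<double>::max();
    for (int r = 0; r < runs; ++r) {
        auto start = clock_type::now();
        module.calculate_objective(reps);
        auto end = clock_type::now();
        double per_eval =
            std::chrono::duration<double>(end - start).count() / reps;
        best = std::min(best, per_eval);
    }
    return best;
}
```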

Interfaces for interacting with the testing modules are specific to each runner and are therefore described in the specifications for the runners. A broad description is given in the section on testing modules.

The global runner tells each benchmark runner which testing module to use, which test to run, etc. via command-line arguments. The exact invocation format is specific to each benchmark runner and known to the global runner. Generally, the arguments include the following (an illustrative invocation is sketched after the list):

  • path to the testing module
  • path to the input data
  • path to the folder where the output should be saved
  • name of the objective function
  • some of the variables listed in the methodology
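A purely hypothetical invocation, with the arguments in the order listed above, might look like the following; the actual argument names and order are defined by each runner's specification:

```
# Hypothetical; argument order and names are runner-specific.
./CppRunner ./modules/Manual.so ./data/gmm/gmm_d2_K5.txt ./out/ GMM <methodology variables ...>
```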

Benchmark runners output three files into the folder specified by the global runner (an example follows the list). These files are:

  • <name of the input>_times_<name of the testing module>.txt - newline-separated timings for the computation of the objective function and the derivative.
  • <name of the input>_F_<name of the testing module>.txt - newline-separated values of the objective function computed by the module.
  • <name of the input>_J_<name of the testing module>.txt - values of the derivative computed by the module. The exact format is specific to the objective function; see FileFormat for details.
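For example, for a hypothetical input named gmm_d2_K5 run against a testing module named Tensorflow, the three output files would be:

```
gmm_d2_K5_times_Tensorflow.txt
gmm_d2_K5_F_Tensorflow.txt
gmm_d2_K5_J_Tensorflow.txt
```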

Global Runner

The global runner is a script that runs all benchmarks using the corresponding benchmark runners while enforcing the specified hard time limits. It contains the code for starting all the benchmark runners and is aware of all existing testing modules.

The global runner is started by the user, who may pass some or all of the variables listed in the methodology as command-line arguments.

For the complete documentation, refer to GlobalRunner.md.

Result-Processing Scripts

Scripts that consume the outputs of the benchmark runners and process them. Currently we have the following scripts: