The idea of the system is to train several experiments, each with different model conditions and input data configurations, using a single command.
For a given experiment configuration file, the main executer function can perform three tasks: training, measuring the model's prediction error on some dataset, and evaluating the model's performance on a driving benchmark.
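As an illustration, such an experiment configuration file could look like the sketch below. Every field name and value here is hypothetical, chosen only to show the kind of model and input options an experiment might specify; it is not the tool's actual schema.

```yaml
# Illustrative experiment configuration (all field names are hypothetical)
experiment_name: resnet_rgb_baseline
model:
  architecture: resnet34
  dropout: 0.2
input:
  sensors: [rgb_camera]
  image_size: [88, 200]
training:
  batch_size: 120
  learning_rate: 0.0002
  checkpoint_interval: 10000   # how often checkpoints are written
validation_dataset: town01_val
driving_benchmark: basic_benchmark
```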
The training, prediction-error measurement, and driving benchmarks run simultaneously in separate processes. To perform evaluation, the validation and driving modules wait for the training process to produce checkpoints. All of this is coordinated by the executer module.
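The coordination described above can be sketched with Python's `multiprocessing`: one process stands in for training and periodically writes checkpoint files, while a validation process polls the checkpoint directory and evaluates each new checkpoint as it appears. The function names, directory layout, and `.pth` extension are assumptions for illustration, not the framework's real API.

```python
import glob
import os
import tempfile
import time
from multiprocessing import Process


def train(ckpt_dir, num_ckpts):
    # Stand-in for the training loop: periodically write checkpoint files.
    for i in range(num_ckpts):
        time.sleep(0.1)
        with open(os.path.join(ckpt_dir, "ckpt_%d.pth" % i), "w") as f:
            f.write("weights")


def validate(ckpt_dir, num_ckpts):
    # Poll the checkpoint directory; evaluate each new checkpoint once.
    seen = set()
    while len(seen) < num_ckpts:
        new = set(glob.glob(os.path.join(ckpt_dir, "*.pth"))) - seen
        for ckpt in sorted(new):
            # A real validation module would compute the prediction error here.
            print("validating", os.path.basename(ckpt))
        seen |= new
        time.sleep(0.05)


if __name__ == "__main__":
    ckpt_dir = tempfile.mkdtemp()
    procs = [Process(target=train, args=(ckpt_dir, 3)),
             Process(target=validate, args=(ckpt_dir, 3))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

A driving-benchmark process would follow the same pattern as `validate`, which is why a single executer can launch all three and simply wait for them to finish.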
During execution, all the information is logged and printed on screen, summarizing each experiment's training or evaluation status so that a user can check it at a glance.