Notice: Consider using https://github.com/lukas-weber/Carlo.jl, an updated Julia framework that does something very similar to this project. This older C++ code is provided without maintenance or support for interested people who are unable to use Julia. Feel free to fork!
loadleveller is a C++ framework for creating MPI-distributed Monte Carlo simulations. It takes care of data storage, error handling, and other functionality common to all MC codes. The only thing you have to implement is the actual update and measurement code. It uses HDF5 and YAML as data storage formats and provides a Python package containing helpers for launching jobs and accessing the results of simulations.
loadleveller uses the Meson build system. It is not as widespread as CMake, but it has some advantages, especially on systems with old software. Since it can be installed via pip, it is just as available as other build systems.

Dependencies:
- MPI
- HDF5 >=1.10.1 (fallback provided)
- nlohmann_json (fallback provided)
- fmt (fallback provided)
The Python package requires:
- h5py
- numpy
If you don’t have Meson installed, get it from your distribution’s package manager or via pip:

```bash
pip3 install --user meson ninja
```

Then, from the source directory, run

```bash
meson . build
cd build
ninja
```

You can install it using `ninja install`. The recommended way to use loadleveller is to use Meson for your Monte Carlo code, too. Then you can use it as a wrap dependency (sketched below) and not worry about this section at all.
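As a hedged illustration, a wrap dependency setup could look roughly like the following. The repository URL, the revision, and the `loadleveller_dep` variable name exported by the subproject are assumptions; check loadleveller’s own `meson.build` for the actual names.

`subprojects/loadleveller.wrap`:

```ini
[wrap-git]
url = https://github.com/lukas-weber/loadleveller.git
revision = master
```

In your own `meson.build`:

```meson
# Falls back to the wrap above if loadleveller is not installed system-wide.
# The 'loadleveller_dep' fallback variable name is an assumption.
loadleveller_dep = dependency('loadleveller',
    fallback: ['loadleveller', 'loadleveller_dep'])
```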
To use loadleveller effectively, you should install the Python package from the `python/` subdirectory:

```bash
cd python
python3 setup.py install --user
```
For details on how to implement and use your own MC simulation with loadleveller, see the example project. Once you have built it and obtained an executable, you need to create a job file containing the set of parameters you want to calculate. This is conveniently done with the `loadleveller.taskmaker` module (see the sketch below).
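A job script might look roughly like this hedged sketch. The `TaskMaker` constructor arguments, the attribute names (`mc_binary`, `sweeps`, `thermalization`, `binsize`), and the task parameters are assumptions modeled on the example project, not a verified API reference; consult the example for the actual usage.

```python
# Hypothetical job script using loadleveller.taskmaker.
# All names below (constructor arguments, attributes, parameters) are
# assumptions based on the example project.
from loadleveller import taskmaker

tm = taskmaker.TaskMaker('ising', __file__)

# Parameters shared by all tasks (names are assumptions).
tm.mc_binary = 'build/ising_mc'   # path to your Monte Carlo executable
tm.sweeps = 10000                 # measurement sweeps per task
tm.thermalization = 1000          # thermalization sweeps
tm.binsize = 100                  # bin size for observables

# One task per temperature; tm.task() records the current parameter set.
for temperature in [1.0, 2.0, 2.27, 3.0]:
    tm.task(Lx=16, T=temperature)

tm.write()  # writes the job file read by `loadl run`
```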
You can start the job (either on a cluster batch system or locally) using the `loadl run JOBFILE` command from the Python package. It will calculate all the tasks defined in the job file in parallel. Measurements and checkpoints are saved in the `JOBFILE.data` directory. After everything is done, measurements are averaged and merged into the human-readable `JOBFILE.results.json` file. You can use the `loadleveller.mcextract` module to conveniently extract results from it for further processing.
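For illustration, here is a minimal sketch that reads the merged results file directly with the standard `json` module instead of `loadleveller.mcextract`. The schema keys (`parameters`, `results`, `mean`, `error`) and the observable name `Energy` are assumptions; inspect your own `JOBFILE.results.json` to confirm them.

```python
# Hedged sketch: extract observable means and errors from the merged
# results file. The key names and the 'Energy' observable are assumptions
# about the schema; check your results file or use loadleveller.mcextract.
import json

with open('ising.results.json') as f:
    tasks = json.load(f)

for task in tasks:
    params = task['parameters']
    energy = task['results']['Energy']
    print(params['T'], energy['mean'], energy['error'])
```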
Use the `loadl delete` tool to delete all job data for a fresh start after you have changed something. `loadl status` gives you information about job progress. You do not have to wait until completion to get a result file: `loadl merge JOBFILE` merges whatever results are already done.
Hidden features
Some handy features that are not immediately obvious.
For every observable (except evalables, which are calculated from other observables in postprocessing), loadleveller estimates the autocorrelation time. You can see it in the results file and also extract it with Python. These times are given in units of bins, so if you set a bin size for your observable (visible as `internal_bin_size` in the results), they are not autocorrelation times in MC sweeps; multiplying by the bin size converts them, as in the sketch below.
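Converting a reported autocorrelation time from bins to sweeps is a single multiplication; the values below are made-up placeholders for what you would read from the results file.

```python
# Values read from the results file (placeholders for illustration).
tau_bins = 3.2           # autocorrelation time in units of bins
internal_bin_size = 100  # bin size reported for the observable

# Autocorrelation time in MC sweeps.
tau_sweeps = tau_bins * internal_bin_size
print(tau_sweeps)  # 320.0
```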
You may notice observables starting with `_ll_` in your result files. These are profiling data that loadleveller collects on every run, all given in seconds. It can be useful to plot them to check the performance of your method in different parameter regimes, or the ratio of time spent on sweeps versus measurements (see the sketch below). Be careful about environmental effects, especially when your simulation runs next to other programs.
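As a hedged sketch, the profiling observables can be filtered out of the results file by their `_ll_` prefix; the schema keys are the same assumptions as in the extraction example above.

```python
# Hedged sketch: print the _ll_ profiling observables for each task.
# Schema keys ('results', 'mean') are assumptions about the file layout.
import json

with open('ising.results.json') as f:
    tasks = json.load(f)

for task in tasks:
    for name, obs in task['results'].items():
        if name.startswith('_ll_'):
            print(name, obs['mean'], 'seconds')
```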