Repository containing all validations and benchmarking of Minian.
The scripts in this repo try to use a local SSD as intermediate folder when running both pipelines to ensure a fair comparison.
It will create a ~/var/minian-validation directory under your home folder.
Make sure your home folder has enough space (500 GB+) and is ideally on an SSD.
Otherwise, you can change the *_INT_PATH variables in the run_*.py scripts to point them to a better location for storing intermediate variables.
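As a sanity check before running the pipelines, a short script like the following can report the free space at the default intermediate location. This is a sketch using only the Python standard library; the 500 GB threshold comes from the note above, and the default path mirrors the one the scripts create.

```python
import shutil
from pathlib import Path

# Default intermediate location created by the scripts in this repo.
int_path = Path.home() / "var" / "minian-validation"

# Fall back to the home folder if the directory does not exist yet.
target = int_path if int_path.exists() else Path.home()

# Report free space in GB at the chosen location.
free_gb = shutil.disk_usage(target).free / 1e9
print(f"Free space at {target}: {free_gb:.0f} GB")
if free_gb < 500:
    print("Warning: less than 500 GB free; consider pointing *_INT_PATH elsewhere.")
```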
Three environments are needed to reproduce this repo, and you must use the correct environment for each script. You can create them as follows:
conda env create -n minian-validation-generic -f environments/generic.yml
conda env create -n minian-validation-minian -f environments/minian.yml
conda env create -n minian-validation-caiman -f environments/caiman.yml
conda activate minian-validation-generic
python simulate_data_validation.py
- Change the DPATH variable in both run_minian_simulated.py and run_caiman_simulated.py to "./data/simulated/validation".
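The edit amounts to a one-line change in each of the two scripts. The exact position of the assignment within the scripts is an assumption; look for the existing DPATH definition:

```python
# In run_minian_simulated.py and run_caiman_simulated.py,
# point DPATH at the validation dataset:
DPATH = "./data/simulated/validation"
```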
conda activate minian-validation-minian; python run_minian_simulated.py
conda activate minian-validation-caiman; python run_caiman_simulated.py
You will need source data that is stored on figshare. We provide a convenience script that downloads all the relevant data and stores it in the correct place.
conda activate minian-validation-generic; python get_data.py
conda activate minian-validation-minian; python run_minian_real.py
conda activate minian-validation-caiman; python run_caiman_real.py
conda activate minian-validation-generic; python plot_validation.py
This section simulates the datasets used for benchmarking. Since the benchmark results are csv files already stored in this repo, you can skip this and the next section if you just want to reproduce the plots from the benchmark results we provide.
conda activate minian-validation-generic; python simulate_data_benchmark.py
- Change the DPATH variable in both run_minian_simulated.py and run_caiman_simulated.py to "./data/simulated/benchmark".
conda activate minian-validation-minian; python run_minian_simulated.py
conda activate minian-validation-caiman; python run_caiman_simulated.py
conda activate minian-validation-generic; python plot_benchmark.py
conda activate minian-validation-generic; python simulate_data_benchmark.py
conda activate minian-validation-minian; python run_minian_tradeoff.py
conda activate minian-validation-caiman; python run_caiman_tradeoff.py
conda activate minian-validation-generic; python plot_tradeoff.py