- EDM4eic - EIC data model (see ReconstructedParticle for most needs)
- EICrecon - reconstruction framework
- common_bench - benchmark repository common code
- ROOT's RDataFrame
See the benchmarks/Exclusive-Diffraction-Tagging/diffractive_vm directory for a basic example. Note that the reconstruction is currently far from perfect.
- Create a script that returns exit status 0 for success.
- Any non-zero value will be considered failure.
See common_bench for details.
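The exit-status convention above can be sketched as a minimal benchmark script. Everything here (file names, the "metric" being written) is a hypothetical placeholder, not part of common_bench itself; only the convention matters: exit 0 on success, non-zero on failure.

```shell
#!/usr/bin/env bash
# Minimal benchmark script sketch for CI: must exit 0 on success,
# non-zero on failure. All file and metric names below are
# hypothetical placeholders for a real analysis step.

run_benchmark() {
  mkdir -p results
  local result_file="results/dummy_metric.txt"

  # Stand-in for a real analysis step: write out a metric.
  echo "efficiency 0.92" > "$result_file"

  # Succeed only if the analysis produced non-empty output.
  [ -s "$result_file" ]
}

if run_benchmark; then
  echo "benchmark passed"
else
  echo "benchmark failed" >&2
  exit 1
fi
```

The CI runner only looks at the script's exit status, so any internal check (file exists, plot produced, metric within bounds) can be mapped onto it this way.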
Here we set up to use our local build of the juggler library.
```bash
git clone https://eicweb.phy.anl.gov/EIC/benchmarks/physics_benchmarks.git && cd physics_benchmarks
git clone https://eicweb.phy.anl.gov/EIC/benchmarks/common_bench.git setup
source setup/bin/env.sh && ./setup/bin/install_common.sh
source .local/bin/env.sh && build_detector.sh
mkdir_local_data_link sim_output
mkdir -p results config
```
The collaboration uses the EIC group on eicweb, which contains the subgroups detectors and benchmarks.
The main software components locally developed are:
- EICrecon (documentation) - Event processing framework (i.e. where algorithms live)
- EDM4eic - EIC data model
- npsim - DD4hep simulation steering
The key collaboration/user code repositories are:
- epic - ePIC at IP6
- D2EIC - Detector II at IP8
- Detector benchmarks (eicweb mirror) - Set of analysis scripts run on the Geant4 output before any digitization or reconstruction. Also contains some detector calibrations.
- Physics benchmarks (eicweb mirror) - Analysis of reconstructed data for physics performance. The goal is to provide metrics for optimizing detector design and reconstruction.
- Reconstruction benchmarks (legacy only)
The SWG leverages GitLab's CI/CD features heavily in our workflow. Here are some simplified explanations of these features.
A pipeline is an automated set of jobs/scripts that are triggered by certain actions, such as pushing a merge request or merging into the master/main branch of a repository. Typically there is one pipeline per repository, but there can be multiple, and a pipeline can trigger downstream pipelines ("child" pipelines) or be triggered by an upstream pipeline. Pipelines can also be triggered manually.
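As a sketch of how one pipeline triggers a downstream ("child") pipeline, a job in `.gitlab-ci.yml` can use the `trigger` keyword. The project path and stage name below are illustrative, not the collaboration's actual CI configuration:

```yaml
# Illustrative downstream trigger: when this pipeline runs,
# it also starts the pipeline of another project.
trigger_detector_benchmarks:
  stage: deploy
  trigger:
    project: eic/detector_benchmarks   # downstream project (illustrative path)
    branch: master
    strategy: depend                   # this job's status mirrors the child pipeline's
```

With `strategy: depend`, the upstream pipeline waits for the child pipeline and fails if it fails, which is what makes chained benchmark pipelines useful as a gate on merge requests.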
The graph below shows some of the downstream pipeline triggers (arrows) between different repositories.
```mermaid
graph TD;
  epic[ePIC<br>eic/epic]-->db[Detector Benchmarks<br>eic/detector_benchmarks];
  db-->rb[Reconstruction Benchmarks<br>eicweb:benchmarks/reconstruction_benchmarks];
  db-->pb[Physics Benchmarks<br>eic/physics_benchmarks];
  eicrecon[EICrecon<br>eic/EICrecon]-->container[EIC container/eic-shell<br>eic/container];
  container-->db;
```
Note that any change to the detectors will cause all the benchmarks to be run.
"OK, pipelines run automatically. What is the big deal?"
All pipeline jobs have "artifacts", which are simply selected files that are saved and can be downloaded individually or as a zip file.
Note that artifacts are not the output data, which is far too big. Artifacts are small files such as images, plots, text files, and reports.
Artifacts can be browsed via the web interface; for example, the latest reconstruction benchmark results from the summary job can be browsed.
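In `.gitlab-ci.yml` terms, a job declares which files to keep with the `artifacts` keyword. The job name, script path, and directory below are illustrative, not taken from the actual benchmark configuration:

```yaml
# Illustrative job keeping small result files (plots, text summaries)
# as downloadable artifacts, while large output data is not saved.
diffractive_vm_results:
  stage: analyze
  script:
    - bash run_analysis.sh            # hypothetical analysis script
  artifacts:
    paths:
      - results/                      # small files only: images, plots, reports
    expire_in: 1 week
```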