Scaling benchmark of CUED

Short overview of the workflow

One starts the calculations (on SNG) via benchmark_starter.py, which currently distributes the runs over fat, general, and large nodes.
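
A minimal sketch of what such a starter script could look like, assuming one Slurm sbatch submission per node count; the node counts, the partition mapping, the per-run directory layout, and the job script name are illustrative assumptions, not taken from the repository:

```python
"""Hypothetical sketch of a benchmark starter: submit one CUED run per node
count and pick a partition (fat/general/large) depending on the run size."""
import os
import subprocess

# Assumed node counts for the scaling series and the partition each size runs on.
RUNS = [(1, "general"), (4, "general"), (16, "general"),
        (64, "fat"), (256, "large"), (1024, "large")]

# Hypothetical CUED job script living in the main directory.
job_script = os.path.abspath("job_script.sh")

for nodes, partition in RUNS:
    run_dir = f"nodes_{nodes:04d}"          # hypothetical per-run subdirectory
    os.makedirs(run_dir, exist_ok=True)
    subprocess.run(
        ["sbatch",
         f"--nodes={nodes}",
         f"--partition={partition}",
         "--output=slurm.out",              # Slurm output collected per run
         job_script],
        cwd=run_dir,
        check=True,
    )
```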

After these have finished, slurm_cleanup.py, invoked in the main directory, creates additional slurm.short files in the subdirectories to reduce the upload size (the full slurm.out as well as the time- and frequency_data.dat files are excluded via .gitignore because of their size). Furthermore, a csv is created with the number of nodes (not cores!) and the runtime in seconds.
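
A minimal sketch of the cleanup step under the same assumptions; the truncation length, the directory naming, and the runtime line that is parsed are guesses, and only the slurm.short and csv outputs follow the description above:

```python
"""Hypothetical sketch of the cleanup: keep only the tail of each slurm.out as
slurm.short and collect (nodes, runtime) pairs into a csv."""
import csv
import pathlib
import re

rows = []
for out_file in pathlib.Path(".").glob("*/slurm.out"):
    lines = out_file.read_text().splitlines()
    # Keep only the last 50 lines so the committed file stays small;
    # the full slurm.out remains ignored by .gitignore.
    (out_file.parent / "slurm.short").write_text("\n".join(lines[-50:]) + "\n")

    # Assumed: the node count is encoded in the directory name (e.g. "nodes_0256")
    # and the output ends with a line like "Total runtime: 1234.5 s".
    nodes = int(re.search(r"\d+", out_file.parent.name).group())
    runtime = float(re.search(r"Total runtime:\s*([\d.]+)",
                              "\n".join(lines)).group(1))
    rows.append((nodes, runtime))

with open("runtimes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["nodes", "runtime_s"])
    writer.writerows(sorted(rows))
```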

Finally, benchmark_plot.py reads the csv, prints the overall computational time (around 45k core hours), and creates a png and a tikz file with the relative speedup plotted against the number of nodes/cores.
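
A minimal sketch of the plotting step; the csv name, the value of 48 cores per SNG node, and the use of tikzplotlib for the TikZ export are assumptions:

```python
"""Hypothetical sketch of the plot script: read the csv from the cleanup step,
report the total core-hour budget, and plot the relative speedup."""
import csv
import matplotlib.pyplot as plt
import tikzplotlib  # assumed; any matplotlib-to-TikZ exporter would do

CORES_PER_NODE = 48  # assumed cores per SNG node

with open("runtimes.csv") as f:
    data = sorted((int(r["nodes"]), float(r["runtime_s"]))
                  for r in csv.DictReader(f))

nodes = [n for n, _ in data]
runtimes = [t for _, t in data]

# Total computational cost in core hours (quoted above as around 45k coreh).
total_coreh = sum(n * CORES_PER_NODE * t / 3600.0 for n, t in data)
print(f"Total computational time: {total_coreh:.0f} core hours")

# Relative speedup with respect to the smallest run.
speedup = [runtimes[0] / t for t in runtimes]

plt.loglog(nodes, speedup, "o-", label="measured")
plt.loglog(nodes, [n / nodes[0] for n in nodes], "--", label="ideal")
plt.xlabel("Number of nodes")
plt.ylabel("Relative speedup")
plt.legend()
plt.savefig("speedup.png", dpi=300)
tikzplotlib.save("speedup.tikz")
```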

About

Scaling benchmark for CUED up to ~100k cores