Benchmarks Metrics

This repository contains both dynamic and static data mined from the LLVM Test Suite and the SPEC CPU programs. For each benchmark collection, Perf Stat counter metrics and Milepost features were collected and categorized according to the Clang optimization flag used to compile the benchmarks: -O0, -O1, -O2, or -O3. The SPEC CPU programs were run with the 'train' workload size and a single iteration.
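
Below is a minimal sketch, in Python, of how a benchmark could be compiled once per optimization level. It only illustrates the categorization scheme; it is not the repository's actual build pipeline, and the source file and output names are assumptions.

```python
"""Illustrative sketch: build one benchmark at each of the four
optimization levels studied in this repository. File names are
hypothetical."""
import subprocess
from pathlib import Path

OPT_LEVELS = ["-O0", "-O1", "-O2", "-O3"]
SRC = Path("benchmark.c")  # hypothetical benchmark source file

for level in OPT_LEVELS:
    out = Path(f"benchmark{level}")  # e.g., "benchmark-O2"
    # One binary per flag; the metrics collected from each binary are
    # later grouped under that flag's category.
    subprocess.run(["clang", level, str(SRC), "-o", str(out)], check=True)
```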

Linux Perf

Linux Perf is a suite of performance monitoring tools available on Linux systems, enabling users to collect and analyze detailed performance data. The tool used to extract our data is perf stat, which uses Performance Monitoring Units (PMUs) to record hardware events during code execution.
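
As an illustration of this kind of measurement, the sketch below runs one compiled benchmark under perf stat and records a few common hardware counters in CSV form. The event list and file names are assumptions, not necessarily the exact counters stored in this repository.

```python
"""Illustrative sketch: record hardware counters for one binary with
perf stat. Event list and file names are hypothetical."""
import subprocess

EVENTS = "cycles,instructions,cache-references,cache-misses,branch-misses"

# -x ',' asks perf stat for machine-readable CSV output;
# -o writes the counters to a file instead of stderr.
subprocess.run(
    ["perf", "stat", "-e", EVENTS, "-x", ",", "-o", "perf_stat-O2.csv",
     "--", "./benchmark-O2"],
    check=True,
)
```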

GCC Milepost Features

Milepost is a set of 56 program features introduced by Fursin et al. These features include the number of basic blocks, the number of edges in the control-flow graph, the number of function calls, etc. (see Table 2 in the original paper). To collect the features, we used the Milepost implementation available in the YaCoS infrastructure. The data reported in this repository is the same data that we used to explore the space of compiler optimizations in Clang in two other papers, by Faustino et al. and by Damasio et al.
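
The sketch below shows one way such feature vectors could be read back for analysis. The file name and column labels (ft1 through ft56, following the paper's numbering) are assumptions about the layout, not the repository's documented schema.

```python
"""Illustrative sketch: load Milepost feature vectors from a CSV file.
The path and column names are hypothetical."""
import csv

with open("milepost_features-O2.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Each row becomes one 56-dimensional static feature vector.
        vector = [float(row[f"ft{i}"]) for i in range(1, 57)]
        print(row.get("benchmark", "<unknown>"), len(vector))
```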

Environment Details

Hardware

  • CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
  • CPU Cores: 10
  • Threads: 20
  • L1 Cache: 10 x 32 KB instruction caches and 10 x 32 KB data caches
  • L2 Cache: 10 x 256 KB
  • L3 Cache: 25 MB
  • RAM: 32 GB

Software

  • Operating System: Ubuntu 20.04.6 LTS
  • Python Version: 3.8.107
  • SPEC CPU: 2017
  • LLVM Build: 17.0.6
  • Clang/Clang++: 17.0.6
  • LLVM Test-Suite Build: Release 17
  • Linux Perf: 5.4.269

References

  1. Thaís Damásio, Michael Canesche, Vinícius Pacheco, Marcus Botacin, Anderson Faustino da Silva, Fernando Magno Quintão Pereira: A Game-Based Framework to Compare Program Classifiers and Evaders. CGO 2023: 108-121 -- This paper uses MilePost features to find out if two programs solve the same task.

  2. Anderson Faustino da Silva, Edson Borin, Fernando Magno Quintão Pereira, Nilton Luiz Queiroz Junior, Otávio Oliveira Napoli: Program representations for predictive compilation: State of affairs in the early 20's. J. Comput. Lang. 73: 101171 (2022) -- This paper compares the precision of the MilePost features against other program embeddings in various different tasks.

  3. Grigori Fursin, Yuriy Kashnikov, Abdul Wahid Memon, Zbigniew Chamski, Olivier Temam, Mircea Namolaru, Elad Yom-Tov, Bilha Mendelson, Ayal Zaks, Eric Courtois, François Bodin, Phil Barnard, Elton Ashton, Edwin V. Bonilla, John Thomson, Christopher K. I. Williams, Michael F. P. O'Boyle: Milepost GCC: Machine Learning Enabled Self-tuning Compiler. Int. J. Parallel Program. 39(3): 296-327 (2011) -- The original description of the MilePost features (see Table 2).

  4. André Felipe Zanella, Anderson Faustino da Silva, Fernando Magno Quintão Pereira: YACOS: a Complete Infrastructure to the Design and Exploration of Code Optimization Sequences. SBLP 2020: 56-63 -- This paper describes the implementation of the LLVM pass that we have used to collect the MilePost features.
