
Improve CSV reporting of benchmarks, add stencil system benchmark #221

Merged
merged 2 commits into master from improved-benchmark-ci
Oct 24, 2023

Conversation

@PeterTh (Contributor) commented on Oct 23, 2023

This PR is in preparation for a follow-up which will use tags in the evaluation for categorization.
This needs to be done in two steps because the CI benchmark workflow depends on the repository state.

Preparatory changes:

  • Outputs the tag list in the CSV reporter.
  • Makes the script resilient to CSV layout/order changes.
  • Unifies tagging across benchmarks.
  • Adds another stencil-like system benchmark.

The last point might seem a bit unrelated. I added it because otherwise our "full system" benchmark category would include only the many-task benchmark, which is currently quite far from Celerity's common use case.
We always wanted to extend this set anyway, so this is a first step: a very simple iterative stencil benchmark that should better represent some common usage scenarios in terms of runtime system performance impact.
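For readers unfamiliar with the pattern: an iterative stencil repeatedly recomputes each cell from its neighbors' values in the previous time step. A minimal 1D sketch in plain Ruby (purely illustrative of the access pattern — the actual benchmark is a Celerity/SYCL program, and the grid size and step count here are arbitrary):

```ruby
# One Jacobi-style time step: every interior cell becomes the average of
# itself and its two neighbors, read from the *previous* step's grid.
# Boundary cells are left fixed.
def stencil_step(prev)
  cur = prev.dup
  (1...(prev.size - 1)).each do |i|
    cur[i] = (prev[i - 1] + prev[i] + prev[i + 1]) / 3.0
  end
  cur
end

grid = [0.0] * 9
grid[4] = 1.0                      # point source in the middle
50.times { grid = stencil_step(grid) }
```

The double-buffered read-old/write-new structure is what makes the dependency pattern interesting for a runtime system: each step depends on the whole previous step, but neighboring cells can be computed in parallel within a step.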

The goal is to be able to finish https://github.com/celerity/meta/issues/47 in the follow-up.

@PeterTh force-pushed the improved-benchmark-ci branch from cd72912 to 8329cc2 on October 24, 2023 10:11
@github-actions (bot) commented:
Check-perf-impact results: (e3a4351784531e1861bff0cb14c92210)

⚠️ Significant slowdown in some microbenchmark results: 16 individual benchmarks affected
🚀 Significant speedup in some microbenchmark results: 16 individual benchmarks affected
Added microbenchmark(s): benchmark stencil pattern with N time steps - 50 / iterations, benchmark stencil pattern with N time steps - 1000 / iterations

Overall relative execution time: 1.01x (mean of relative medians)

@celerity deleted a comment from github-actions bot on Oct 24, 2023
@PeterTh (Contributor, Author) commented on Oct 24, 2023

This goes to show that the threshold of "significance" also needs to be tweaked in the follow-up PR which reworks the reporting script.

@github-actions (bot) left a comment:

clang-tidy made some suggestions

test/system_benchmarks.cc (resolved)
@psalz (Member) left a comment:

LGTM!

test/system_benchmarks.cc — 3 review threads (outdated, resolved)
@PeterTh force-pushed the improved-benchmark-ci branch 2 times, most recently from f9e27e1 to cd3d1ad on October 24, 2023 13:03
* Unify/standardize a few tags
* Update ruby perf impact script to use names instead of column indices
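The second commit's change — looking up CSV columns by header name instead of positional index — might look roughly like the following sketch. This is a hypothetical illustration, not the actual script; the column names ("name", "tags", "median") and values are made up:

```ruby
require "csv"

# Sample benchmark results; with headers: true, columns are addressed by
# name, so the script keeps working if the reporter reorders or inserts
# columns (instead of breaking silently as index-based access would).
csv_text = <<~ROWS
  name,tags,median
  stencil,system,42.0
  many-task,system,13.5
ROWS

rows = CSV.parse(csv_text, headers: true)
medians = rows.map { |row| [row["name"], row["median"].to_f] }.to_h
puts medians["stencil"]  # prints 42.0
```

Compared to `row[2].to_f`, the name-based lookup is self-documenting and resilient to the layout/order changes this PR introduces in the reporter.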
@PeterTh force-pushed the improved-benchmark-ci branch from cd3d1ad to efd0fb2 on October 24, 2023 13:21
@celerity deleted a comment from github-actions bot on Oct 24, 2023
@PeterTh merged commit 5cc76fa into master on Oct 24, 2023
28 checks passed
@PeterTh deleted the improved-benchmark-ci branch on October 24, 2023 14:03