Prism benchmarks #30
Conversation
Force-pushed from 4ad5fe9 to 736e7cc
# Read all files in the benchmarks directory to warm up the file system cache
for i in {1..10}
do
  cat ../yjit-bench/benchmarks/**/*.rb > /dev/null
done
This was Alex M's idea -- basically warm up the file system cache by reading the benchmark files a few times. I've found that it significantly improves the accuracy of the benchmarks on the first run!
TIL
The fact that this step lowers the variance makes me think the number of times hyperfine is warming up may not be enough. It's not crucial, but it may be worth experimenting with different warmup values.
@KaanOzkan agreed. This is significantly faster than warmup loops, so I've done this for now, but I think it's worth reexamining the approach as we go.
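For reference, the same warm-up idea can be sketched in Ruby. This is a hypothetical helper (the `warm_cache` name and `passes` parameter are illustrative, not from the PR), assuming the benchmark files live under a directory like `../yjit-bench/benchmarks`:

```ruby
require "tmpdir"

# Warm the OS page cache by reading every benchmark file a few times,
# mirroring the shell loop in the PR. Returns the number of files read.
# The method name and keyword argument are illustrative.
def warm_cache(dir, passes: 3)
  files = Dir.glob(File.join(dir, "**", "*.rb"))
  passes.times { files.each { |f| File.binread(f) } }
  files.length
end
```

Reading with `File.binread` avoids any encoding work; the file contents are discarded, since only the read itself matters for cache warming.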
Force-pushed from 736e7cc to f75b0da
TIL
Great!
Force-pushed from f75b0da to f75db58
These benchmarks will help us measure the progress of the Prism-in-Sorbet project. While they can be run from any machine, "official" results should be gathered on an AWS bare metal instance. Results should be added to the prism_benchmarks/data directory.
Force-pushed from 7bcf033 to 8ede117
Motivation
This PR introduces a script to run benchmarks comparing the performance of the Prism parser to the Sorbet parser. The results are saved as JSON files, allowing us to compare the two parsers' performance at any given point, as well as to track it over time.
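Since the results are stored as JSON, comparing two runs can be as simple as taking the ratio of mean times. The sketch below is hypothetical (the `speedup` helper is not part of the PR), and assumes the files follow hyperfine's `--export-json` schema, where each file has a top-level `"results"` array of entries with a `"mean"` field:

```ruby
require "json"

# Compute the mean-time ratio between a baseline run and a candidate run.
# Assumes hyperfine-style JSON: {"results": [{"command": ..., "mean": ...}]}.
# A return value greater than 1.0 means the candidate is faster.
def speedup(baseline_json, candidate_json)
  base = JSON.parse(baseline_json)["results"].first["mean"]
  cand = JSON.parse(candidate_json)["results"].first["mean"]
  base / cand
end
```

In practice the inputs would come from `File.read` on two files under prism_benchmarks/data; string inputs are used here to keep the sketch self-contained.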
Benchmark Results
Benchmark 1: yjit-bench, parser only
Benchmark 2: sorbet-test, parser only
Benchmark 3: prism regression tests, whole pipeline