Fix bencher false alarms on CI #1158
Conversation
Bencher
Click to view all benchmark results
Bencher - Continuous Benchmarking: View Public Perf Page | Docs | Repo | Chat | Help
Looks good. Did you get a chance to inspect what difference the 50 vs. 100 sample sizes make? Should the time in `measurement_time` also be increased?
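For context, in a Criterion-based bench harness the two knobs discussed above would look roughly like this. This is a sketch, not the project's actual configuration; the benchmark name and the 10-second value are illustrative:

```rust
// Sketch assuming the `criterion` crate: raising sample_size from 50 to 100
// usually also needs a longer measurement_time so each sample still gets
// enough iterations within the measurement window.
use std::time::Duration;

use criterion::{criterion_group, criterion_main, Criterion};

fn bench_example(c: &mut Criterion) {
    // Placeholder benchmark body; the real project benches its own code here.
    c.bench_function("example", |b| b.iter(|| 2 + 2));
}

fn configured() -> Criterion {
    Criterion::default()
        .sample_size(100) // was 50; more samples tighten the statistics
        .measurement_time(Duration::from_secs(10)) // illustrative; give the extra samples room
}

criterion_group! {
    name = benches;
    config = configured();
    targets = bench_example
}
criterion_main!(benches);
```

A larger sample size reduces run-to-run variance (and hence false regression alarms), at the cost of longer CI runs unless `measurement_time` is budgeted accordingly.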
In the future we might also want to move the env vars initialized in the `script` section to an `env` section; that would probably let us use envs like `process.env.PR_BASE` across the different runs.
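To illustrate the suggestion, a hypothetical GitHub Actions snippet (step and variable wiring are illustrative, not this repo's actual workflow) that defines the variable once at the job level instead of inside each script step:

```yaml
jobs:
  benchmark:
    runs-on: ubuntu-latest
    # Defined once here, so every step (and every run) sees the same value.
    env:
      PR_BASE: ${{ github.event.pull_request.base.ref }}
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            // Job-level env vars are visible via process.env in github-script.
            core.info(`base branch: ${process.env.PR_BASE}`);
```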
Moving from 50 to 100 was the suggestion I got from the
This PR updates the way branches are managed by Bencher, greatly reducing the likelihood of false alarms (about performance regressions) from our CI.

Closes #1051