Add benchmark process in README #31

Draft: wants to merge 1 commit into main
14 changes: 14 additions & 0 deletions README.md
@@ -1,2 +1,16 @@
# autotune-results
Recommendations and Results from Autotune

# Methodology to generate the autotune-results
Below are the factors considered while running the benchmark in an Autotune experiment:
- Repeatability
- Convergence
- Reproducibility

These factors are measured using the following process:
1. Each Autotune experiment usually consists of 100 trials.
2. Each trial tests a specific configuration suggested by HPO (hyperparameter optimization).
3. Each trial runs the benchmark for multiple iterations, and the benchmark container is re-deployed at the start of each iteration.
4. Each iteration in a trial includes warmup and measurement cycles; the duration of the warmup cycles is based on pre-run data from the benchmark.
5. For each trial, convergence of the benchmark data is measured by calculating a confidence interval for each metric using the t-distribution (see the statistics sketch after this list).
6. The min, max, mean, and percentile information is calculated for each metric.
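
To make the trial/iteration/cycle nesting in steps 1-4 concrete, here is a minimal Python sketch of the control flow. Every helper name (`get_hpo_config`, `deploy_benchmark_container`, etc.) and the iteration count of 5 are hypothetical placeholders for illustration, not actual Autotune APIs or settings.

```python
import random

NUM_TRIALS = 100          # step 1: an experiment usually consists of 100 trials
ITERATIONS_PER_TRIAL = 5  # step 3: assumed iteration count, for illustration only

def get_hpo_config(trial_id):
    # Stand-in for a configuration suggested by HPO (step 2).
    return {"cpu_limit": random.choice([1.0, 2.0, 4.0]),
            "mem_limit_mb": random.choice([512, 1024, 2048])}

def deploy_benchmark_container(config):
    pass  # placeholder: container is re-deployed at the start of each iteration (step 3)

def run_warmup_cycles():
    pass  # placeholder: warmup duration is based on pre-run data (step 4)

def run_measurement_cycles():
    # Placeholder returning a fake throughput measurement for one iteration.
    return random.gauss(500, 5)

def run_experiment():
    results = []
    for trial_id in range(NUM_TRIALS):
        config = get_hpo_config(trial_id)
        samples = []
        for _ in range(ITERATIONS_PER_TRIAL):
            deploy_benchmark_container(config)
            run_warmup_cycles()
            samples.append(run_measurement_cycles())
        results.append({"config": config, "throughput_samples": samples})
    return results

results = run_experiment()
print(len(results), "trials completed")
```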
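
For steps 5 and 6, the sketch below computes a t-distribution confidence interval and the min/max/mean/percentile summary for one metric across a trial's iterations. It assumes NumPy and SciPy are available; the sample values, the 95% confidence level, and the specific percentiles (p50/p95/p99) are illustrative assumptions, not values taken from the Autotune codebase.

```python
import numpy as np
from scipy import stats

def confidence_interval(samples, confidence=0.95):
    """Return (mean, lower, upper) for a two-sided t-distribution CI (step 5)."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    mean = samples.mean()
    # Standard error of the mean; ddof=1 gives the sample standard deviation.
    sem = samples.std(ddof=1) / np.sqrt(n)
    # Critical value of the t-distribution with n-1 degrees of freedom.
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return mean, mean - t_crit * sem, mean + t_crit * sem

def summarize(samples):
    """Min, max, mean, and percentile info for a metric (step 6)."""
    samples = np.asarray(samples, dtype=float)
    return {
        "min": samples.min(),
        "max": samples.max(),
        "mean": samples.mean(),
        "p50": np.percentile(samples, 50),
        "p95": np.percentile(samples, 95),
        "p99": np.percentile(samples, 99),
    }

# Example: made-up throughput measurements from one trial's iterations.
throughput = [512.3, 507.9, 515.1, 510.4, 509.8]
mean, lo, hi = confidence_interval(throughput)
print(f"mean={mean:.1f}, 95% CI=[{lo:.1f}, {hi:.1f}]")
print(summarize(throughput))
```

A narrow confidence interval relative to the mean indicates that the metric has converged for that trial; a wide one suggests more iterations are needed.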