From 42428f72b9936989bb9a31bca01e79e3a9f64c60 Mon Sep 17 00:00:00 2001
From: kusumachalasani
Date: Tue, 31 May 2022 14:00:32 +0530
Subject: [PATCH] Add benchmark process

---
 README.md | 44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/README.md b/README.md
index 7d4ca026..49621ef9 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,46 @@
 # autotune-results
 Recommendations and Results from Autotune
+
+# Methodology to generate the autotune-results
+The following factors are considered when running the benchmark in an Autotune experiment:
+- Repeatability
+- Convergence
+- Reproducibility
+
+The above factors are measured using the following process:
+1. Each Autotune experiment is usually composed of 100 trials.
+2. Each trial tests a specific configuration from HPO.
+3. Each trial runs the benchmark for multiple iterations. The benchmark container is re-deployed at the start of each iteration.
+4. Each iteration in a trial includes warmup and measurement cycles. The duration of the warmup cycles is based on pre-run data from the benchmark.
+5. For each trial, convergence of the benchmark data is measured by calculating a confidence interval for each metric using the t-distribution (see the sketch after this list).
+6. For each trial, the min, max, mean, and percentile values are calculated for each metric.
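+
+A minimal sketch of steps 5 and 6, assuming the per-iteration samples of one
+metric are available as a Python list; the function name and the sample values
+are hypothetical, not the actual Autotune code:
+
+```python
+import numpy as np
+from scipy import stats
+
+def summarize_metric(measurements, confidence=0.95):
+    """Convergence and summary statistics for one metric of one trial."""
+    data = np.asarray(measurements, dtype=float)
+    mean = data.mean()
+    # The t-distribution CI suits the small sample counts a trial produces.
+    ci_low, ci_high = stats.t.interval(
+        confidence, df=len(data) - 1, loc=mean, scale=stats.sem(data)
+    )
+    return {
+        "min": data.min(),
+        "max": data.max(),
+        "mean": mean,
+        "p50": np.percentile(data, 50),
+        "p95": np.percentile(data, 95),
+        "p99": np.percentile(data, 99),
+        "ci_95": (ci_low, ci_high),
+    }
+
+# Hypothetical throughput samples from one trial's iterations.
+print(summarize_metric([512.3, 498.7, 505.1, 509.9, 501.4]))
+```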