Would it make sense to try to put some kind of confidence interval on the time based on all of the samples?
This is quite tricky to do correctly in the realm of non-i.i.d. statistics, which is the world benchmark timings generally live in. If you do the "usual calculations", you'll end up getting junk results a lot of the time.
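To illustrate the pitfall, here is a minimal sketch (not from the thread) showing how the "usual" i.i.d. standard error of the mean understates the uncertainty when samples are serially correlated, using a simulated AR(1) series as a stand-in for benchmark timings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate autocorrelated "timings" with an AR(1) process (rho = 0.9).
# Real benchmark timings are not i.i.d. either: warm-up, frequency
# scaling, and GC pauses induce serial correlation.
rho, n = 0.9, 10_000
noise = rng.standard_normal(n)
x = np.empty(n)
x[0] = noise[0]
for i in range(1, n):
    x[i] = rho * x[i - 1] + noise[i]

# "Usual" i.i.d. standard error of the mean:
naive_se = x.std(ddof=1) / np.sqrt(n)

# For AR(1), the variance of the sample mean is inflated by roughly
# (1 + rho) / (1 - rho) relative to the i.i.d. formula, so a naive
# confidence interval would be far too narrow.
inflation = (1 + rho) / (1 - rho)
corrected_se = naive_se * np.sqrt(inflation)

print(naive_se, corrected_se)
```

With `rho = 0.9` the correction factor is about `sqrt(19) ≈ 4.4`, i.e. the naive interval is several times too tight.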
A while ago, I developed a working prototype of a subsampling method for calculating p-values (which could be modified to compute confidence intervals), but it relies on getting the correct normalization coefficient for the test statistic + timing distribution (unique to each benchmark). IIRC, it worked decently on my test benchmark data, but only if I manually tuned the normalization coefficient for any given benchmark. There are methods out there for automatically estimating this coefficient, but I never got around to implementing them.
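For concreteness, a minimal sketch of the kind of subsampling test described above. Everything here is illustrative: the function name is invented, and `gamma` plays the role of the normalization coefficient, hard-coded to the i.i.d. value of 0.5 rather than estimated per benchmark as the comment says it must be:

```python
import numpy as np

def subsample_pvalue(old, new, block=50, gamma=0.5):
    """Subsampling p-value for H0: the new timings are no slower.

    `gamma` is the normalization exponent for the test statistic; as
    noted above, the right value depends on the benchmark's timing
    distribution, and 0.5 is just a hand-picked placeholder.
    """
    old, new = np.asarray(old, float), np.asarray(new, float)
    n = min(len(old), len(new))
    # Full-sample test statistic, scaled by n**gamma.
    t_full = n**gamma * (new[:n].mean() - old[:n].mean())
    # The same statistic on contiguous blocks; contiguous subsamples
    # preserve the dependence structure of the series.
    stats = []
    for s in range(0, n - block + 1, block):
        o, v = old[s:s + block], new[s:s + block]
        stats.append(block**gamma * (v.mean() - o.mean()))
    stats = np.asarray(stats)
    # Center the subsample distribution to approximate the null, then
    # report the fraction of subsample statistics at least as extreme.
    null = stats - stats.mean()
    return float(np.mean(null >= t_full))

rng = np.random.default_rng(1)
fast = rng.normal(1.0, 0.1, 2000)   # baseline timings
slow = rng.normal(1.2, 0.1, 2000)   # clearly regressed timings
p = subsample_pvalue(fast, slow)
```

The same machinery could be inverted to produce a confidence interval for the mean difference, which is the modification mentioned above.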
I see. So, to update: currently `ratio` is defined for all metrics, but it just does `/`. We could optionally add actual statistical modelling behind an option, so I think there is still room for genuine hypothesis tests here.
This would compute the ratio of all the relevant metrics:
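A hypothetical sketch of what that plain elementwise `ratio` could look like, assuming (my assumption, not the project's actual API) that each trial is summarized as a mapping from metric name to estimate:

```python
def ratio(target, baseline):
    """Plain elementwise division of every shared metric, no statistics.

    The metric names below are invented for illustration; any real
    implementation would use the project's own trial/estimate types.
    """
    return {k: target[k] / baseline[k] for k in target.keys() & baseline.keys()}

old = {"time_ns": 120.0, "allocs": 3, "bytes": 256}
new = {"time_ns": 96.0, "allocs": 3, "bytes": 128}
r = ratio(new, old)
# r["time_ns"] == 0.8, r["allocs"] == 1.0, r["bytes"] == 0.5
```

This is exactly the "just does `/`" behavior: each ratio is a point estimate with no notion of uncertainty, which is why a statistical option would still add value.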