
The execution time of a micro benchmark is not consistent #868

Closed
@helloguo

Description


I have a micro benchmark that looks like this:

        [Benchmark]
        public void ScaleUPerf0() => CpuMath.Scale(DEFAULT_SCALE, dst, LEN);

        [Benchmark]
        public void ScaleUPerf1() => CpuMath.Scale(DEFAULT_SCALE, dst, LEN);

        [Benchmark]
        public void ScaleUPerf2() => CpuMath.Scale(DEFAULT_SCALE, dst, LEN);
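
Only the three `[Benchmark]` methods are shown above; for context, a self-contained reconstruction might look like the following sketch. The field declarations, the `DEFAULT_SCALE` value, and the `Scale` stub are my assumptions, reconstructed from the call sites and the LEN column of the results table below (the real code is in the repo linked at the end); the class can be run with `BenchmarkRunner.Run<CpuMathBench>()`.

```csharp
using BenchmarkDotNet.Attributes;

public class CpuMathBench
{
    // Assumed declarations: only the [Benchmark] methods appear in the
    // issue, so these names and values are illustrative.
    private const float DEFAULT_SCALE = 1.7f; // illustrative value

    [Params(65537)]
    public int LEN;

    private float[] dst;

    [GlobalSetup]
    public void Setup()
    {
        dst = new float[LEN];
        for (int i = 0; i < LEN; i++)
            dst[i] = i;
    }

    [Benchmark]
    public void ScaleUPerf0() => CpuMath.Scale(DEFAULT_SCALE, dst, LEN);

    [Benchmark]
    public void ScaleUPerf1() => CpuMath.Scale(DEFAULT_SCALE, dst, LEN);

    [Benchmark]
    public void ScaleUPerf2() => CpuMath.Scale(DEFAULT_SCALE, dst, LEN);
}

// Stand-in for the real CpuMath.Scale (scales the first LEN elements of
// dst by a scalar); the actual implementation lives in the linked repo.
public static class CpuMath
{
    public static void Scale(float a, float[] d, int len)
    {
        for (int i = 0; i < len; i++)
            d[i] *= a;
    }
}
```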

ScaleUPerf0, ScaleUPerf1, and ScaleUPerf2 actually test exactly the same function, so I would expect the execution times of the three benchmarks to be similar. However, when I run the benchmarks, the performance data is quite different:

| Method      | LEN   |     Mean |     Error |    StdDev |   Median |
|------------ |------ |---------:|----------:|----------:|---------:|
| ScaleUPerf0 | 65537 | 5.446 us | 0.0062 us | 0.0058 us | 5.448 us |
| ScaleUPerf1 | 65537 | 7.131 us | 0.0055 us | 0.0046 us | 7.132 us |
| ScaleUPerf2 | 65537 | 5.446 us | 0.0081 us | 0.0072 us | 5.447 us |

I was wondering what the reasons might be. The whole micro benchmark can be found here: https://github.com/helloguo/tmp-code/tree/master/bench
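
One way to see whether the split is specific to the harness is to time the same call by hand outside BenchmarkDotNet; a minimal sketch, again using an assumed stand-in for CpuMath.Scale (all names and values here are illustrative):

```csharp
using System;
using System.Diagnostics;

class ManualCheck
{
    const int LEN = 65537;
    const float DEFAULT_SCALE = 1.7f; // illustrative value

    // Stand-in for CpuMath.Scale, as in the sketch above.
    static void Scale(float a, float[] d, int len)
    {
        for (int i = 0; i < len; i++)
            d[i] *= a;
    }

    static void Main()
    {
        var dst = new float[LEN];
        for (int i = 0; i < LEN; i++)
            dst[i] = i;

        // Warm up so the measurement sees JIT-compiled code, not compilation.
        for (int i = 0; i < 1_000; i++)
            Scale(DEFAULT_SCALE, dst, LEN);

        const int iters = 10_000;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iters; i++)
            Scale(DEFAULT_SCALE, dst, LEN);
        sw.Stop();

        // Compare this number against the ~5.4 us vs ~7.1 us split above.
        Console.WriteLine($"{sw.Elapsed.TotalMilliseconds * 1000.0 / iters:F3} us/op");
    }
}
```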
