
Benchmark driver fixes #3

Open

siddhesh wants to merge 7 commits into master
Conversation

siddhesh (Contributor) commented Mar 1, 2019

The current timing computation has an implicit assumption that the
benchmark ran for 10 seconds when in reality it will be a bit off, by
a few microseconds, if not milliseconds. Fix this by recording the
start and end times at the entry and end points of each thread and
then use the earliest start and latest end times to get an estimate of
the total time the script iterations ran for.

Also increase the precision of the ops/s rate.
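In code terms, the aggregation looks roughly like this (a sketch only; tasks[i].start_ns / end_ns and the MIN macro follow the diff, while num_threads, MAX and the surrounding loop are illustrative):

/* Requires <stdint.h>.  */
#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

uint64_t start_time_ns = 0, end_time_ns = 0;

/* Take the earliest per-thread start and the latest per-thread end as the
   window over which the script iterations actually ran.  */
for (int i = 0; i < num_threads; i++) {
  if (i == 0) {
    start_time_ns = tasks[i].start_ns;
    end_time_ns = tasks[i].end_ns;
  }
  else {
    start_time_ns = MIN (start_time_ns, tasks[i].start_ns);
    end_time_ns = MAX (end_time_ns, tasks[i].end_ns);
  }
}

double elapsed_secs = (double) (end_time_ns - start_time_ns) / 1e9;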

Add a BENCH_DURATION macro that allows users to increase or reduce the
benchmark execution time.
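A plausible shape for that knob (a sketch; the macro name comes from the commit, while the default value and the override mechanism are assumptions):

/* Benchmark run time in seconds; can be overridden at build time,
   e.g. by passing -DBENCH_DURATION=30 in the compiler flags.  */
#ifndef BENCH_DURATION
#define BENCH_DURATION 10
#endif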
Add a new target "make run" to run all *.lua benchmarks.  Also modify
the output of the benchmark so that it is amenable to simple parsing.
siddhesh (Contributor Author) commented Mar 5, 2019

I pushed some more changes to this branch since they are all related to the driver and how the benchmarks are run. In summary, these are the changes:

  • Added error checking for functions that could fail (see the sketch after this list)
  • Made the benchmark execution time configurable at build time, to allow for longer or shorter runs
  • Added a new Makefile target 'run' to run the benchmarks
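As a rough illustration of the first point, this is the kind of check added (a sketch only, assuming pthread-based workers; bench_worker and the tasks[] array are illustrative names, and the exact calls checked are in the diff):

/* Requires <pthread.h>, <stdio.h>, <stdlib.h>, <string.h>.  */
int rc = pthread_create (&tasks[i].tid, NULL, bench_worker, &tasks[i]);
if (rc != 0) {
  fprintf (stderr, "pthread_create: %s\n", strerror (rc));
  exit (EXIT_FAILURE);
}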

siddhesh changed the title from "Tighten up timing computation" to "Benchmark driver fixes" on Mar 5, 2019
This allows building the benchmark binary against different LuaJIT static
libraries, not just the one in the submodule, giving users the freedom to
test and post changes without changing the state of the submodule.
    end_time_ns = tasks[i].end_ns;
  }
  else {
    start_time_ns = MIN (start_time_ns, tasks[i].start_ns);
Owner


I am not entirely convinced that this method is much more accurate. Maybe a more accurate way would be to compute the average across all threads?

Contributor Author


Average will give you a throughput rate per thread, which is not incorrect, but it is definitely a different thing from total throughput. If you want total throughput, you either get the start and end times in the main thread, or you get them in the individual threads and take the earliest start time and latest end time. The latter eliminates some of the cloning and joining latency, although it shouldn't make a significant difference if the benchmark environment is controlled.
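To illustrate with made-up numbers: if 4 threads each complete 1,000,000 iterations, the earliest recorded start is at 0 s and the latest end at 10.004 s, then total throughput is 4,000,000 / 10.004 ≈ 399,840 ops/s, whereas averaging the per-thread rates gives roughly 100,000 ops/s, a per-thread figure rather than the total.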
