Use the bench-ledger-ops analysis to create a benchmark #79

Open · 4 of 12 tasks
dnadales opened this issue May 11, 2023 · 1 comment
dnadales commented May 11, 2023

Background

As part of this long-term goal, we want to develop a benchmarking tool that, given two Consensus versions, compares the cost of performing the five main ledger operations across those versions. These five ledger operations are:

  1. Forecast.
  2. Header tick.
  3. Header application.
  4. Block tick.
  5. Block application.

These operations combined constitute the bulk of the time used for block adoption.

We want this tool to be usable in the development process.
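For intuition only, a harness that times these five operations might look like the following criterion sketch. The operation bodies here are dummy stand-ins, not the real Consensus ledger API; the actual tool reuses db-analyser's benchmark-ledger-ops analysis instead.

```haskell
-- Hedged sketch only: the operation bodies are dummy Int functions
-- standing in for the real Consensus ledger calls.
import Criterion.Main (bench, bgroup, defaultMain, nf)

forecast, tickHeader, applyHeader, tickBlock, applyBlock :: Int -> Int
forecast    = (+ 1)
tickHeader  = (* 2)
applyHeader = subtract 1
tickBlock   = (`div` 2)
applyBlock  = negate

main :: IO ()
main = defaultMain
  [ bgroup "ledger-ops"
      [ bench "forecast"           $ nf forecast    42
      , bench "header tick"        $ nf tickHeader  42
      , bench "header application" $ nf applyHeader 42
      , bench "block tick"         $ nf tickBlock   42
      , bench "block application"  $ nf applyBlock  42
      ]
  ]
```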

Motivation

We want to provide a means for Consensus and Ledger developers, as well as release engineers, to spot performance regressions early on.

Definition of done

Produce a tool that allows comparing the cost of the main ledger operations across two Consensus versions. The comparison is carried out by inspecting the artefacts the tool produces; no automated detection of performance regressions is required.

The tool should:

  • Allow specifying the two Consensus versions to compare.
  • Allow specifying the GHC version used to build each Consensus version under comparison.
  • Allow specifying the RTS options used to run db-analyser (see the configuration sketch below).
  • Produce a plot per ledger operation showing the execution times of both versions (see this example).
  • TODO: Produce a report/table with <which values?> and <which format?>.
  • Make each report traceable by storing data like "build information".
  • Be properly documented so that other developers can use it.
  • Yield results that are consistent with the system-level benchmarks.

Additionally, we should:

  • Provide the developers with infrastructure (e.g. AWS instances) and data that they can use to run the benchmark comparison tool.

As a future step, we could consider running these benchmarks in CI, if that adds value.
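To make the first three requirements above concrete, here is a minimal, hypothetical command-line sketch using optparse-applicative. The option names (--version-a, --version-b, --ghc, --rts-opts, --db) are illustrative assumptions, not the tool's actual interface.

```haskell
-- Hypothetical CLI sketch; all option names are assumptions made for
-- illustration, not the actual interface of the comparison tool.
import Options.Applicative

data Opts = Opts
  { versionA :: String    -- first Consensus version (e.g. a git rev)
  , versionB :: String    -- second Consensus version
  , ghcVer   :: String    -- GHC used to build both versions
  , rtsOpts  :: String    -- RTS options passed to db-analyser
  , dbPath   :: FilePath  -- ChainDB to run the analysis against
  }

optsParser :: Parser Opts
optsParser = Opts
  <$> strOption (long "version-a" <> metavar "REV"     <> help "First Consensus version to benchmark")
  <*> strOption (long "version-b" <> metavar "REV"     <> help "Second Consensus version to benchmark")
  <*> strOption (long "ghc"       <> metavar "VERSION" <> help "GHC version used to build both versions")
  <*> strOption (long "rts-opts"  <> metavar "OPTS"    <> value "" <> help "RTS options for db-analyser")
  <*> strOption (long "db"        <> metavar "PATH"    <> help "Path to the ChainDB")

main :: IO ()
main = do
  opts <- execParser $
    info (optsParser <**> helper)
         (fullDesc <> progDesc "Compare ledger-op benchmarks across two Consensus versions")
  putStrLn $ "Comparing " <> versionA opts <> " against " <> versionB opts
```

In practice the tool would check out and build each version with the given GHC, run db-analyser with the given RTS options, and collect the per-operation timings for plotting.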

Subtasks

dnadales (Member Author) commented:
#161 created a tool for comparing benchmarks. We can use that as a starting point. Additional improvements to this tool include (in no particular order):

  • Make analyseFromSlot and numBlocksToProcess optional.
  • Add support for command line argument parsing.
  • Replace A and B in the plot title with the names of versions A and B.
  • Render output data in a more legible format (e.g. Markdown).
    • Round benchmarking metrics to two or three decimals.
  • Compute the distance between the metric vectors for each data point (a minimal sketch follows this list).
  • Perform statistical analysis of the outliers detected during the first benchmarking pass.
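As a very rough illustration of the "distance between metric vectors" item, one could start from a plain Euclidean distance over the per-operation timings of the two versions at a given data point; the numbers below are made up.

```haskell
-- Minimal sketch: Euclidean distance between the per-operation timings
-- of version A and version B for a single data point.
euclideanDistance :: [Double] -> [Double] -> Double
euclideanDistance xs ys = sqrt . sum $ zipWith (\x y -> (x - y) ** 2) xs ys

main :: IO ()
main = print $
  euclideanDistance
    [0.12, 0.03, 0.45, 0.08, 1.20]  -- version A: forecast, header tick, ...
    [0.14, 0.03, 0.52, 0.09, 1.35]  -- version B, same operations
```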
