
Add an option to save a benchmark run for later comparisons #607

Open
CosmicHorrorDev opened this issue Jan 28, 2023 · 5 comments
Labels: feature-request, help wanted
Milestone: hyperfine 2.0


CosmicHorrorDev commented Jan 28, 2023

There are times when I want to save a baseline that I can compare later changes against. It would be convenient to have a way to save a benchmarking run and then load it into later runs. Something like:

$ hyperfine --save-benchmark baseline 'sleep 1'
$ hyperfine --load-benchmark baseline 'sleep 0.5'
{{ display things as though `hyperfine 'sleep 1' 'sleep 0.5'` had been run }}

If you're interested in supporting this, I can work on adding it as a feature.

sharkdp (Owner) commented Feb 28, 2023

That sounds like a cool idea, thank you. It also sounds like a rather complex feature that is not easy to design. So before we go ahead and try to implement this, I'd like to discuss how it would work, what implications it has for other features, what the CLI would look like, etc.

Note: we already have --export-json, so that could probably be used for the "storing" side of things.
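For reference, a rough sketch of what the "loading" side could build on, assuming the shape of the current --export-json output and the serde/serde_json crates (the struct names here are illustrative, not hyperfine internals):

```rust
use serde::Deserialize;
use std::{error::Error, fs::File, path::Path};

// Mirrors the top-level shape of hyperfine's JSON export: {"results": [...]}.
#[derive(Deserialize)]
struct Export {
    results: Vec<SavedResult>,
}

// A subset of the exported fields; serde ignores unknown fields by default,
// so median, user, system, min, max, etc. could be added incrementally.
#[derive(Deserialize)]
struct SavedResult {
    command: String,
    mean: f64,
    stddev: Option<f64>, // may be null, e.g. for a single timing run
    times: Vec<f64>,
}

fn load_baseline(path: &Path) -> Result<Vec<SavedResult>, Box<dyn Error>> {
    let export: Export = serde_json::from_reader(File::open(path)?)?;
    Ok(export.results)
}
```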

sharkdp added the feature-request and help wanted labels on Feb 28, 2023
sharkdp (Owner) commented Feb 28, 2023

Slightly related: #577

@miluoshi

I would also welcome this feature. It's currently possible to save a report with --export-json, so could it be implemented such that, instead of defining one of the commands, you pass an --import-json flag?

# before - a single run
hyperfine 'command1' 'command2'

# after - two separate runs
hyperfine 'command1' --export-json baseline.json
hyperfine --import-json baseline.json 'command2'

The position of the arguments would determine the order of the compared commands, the same way the position of the commands does in the first example. What do you think?
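To make the ordering idea concrete, here is a minimal sketch of how positional arguments could form an ordered list of entries whose order the final comparison preserves. Entry, run_benchmark, and load_results are hypothetical names, not hyperfine's internals:

```rust
use std::path::PathBuf;

struct BenchmarkResult {
    command: String,
    mean: f64,
}

// Each positional item is either a command to benchmark now
// or a previously exported JSON file to import.
enum Entry {
    Command(String), // e.g. 'command2'
    Import(PathBuf), // e.g. --import-json baseline.json
}

// Stubs standing in for the real benchmarking and import logic.
fn run_benchmark(cmd: &str) -> BenchmarkResult {
    BenchmarkResult { command: cmd.to_string(), mean: 0.0 }
}

fn load_results(_path: &PathBuf) -> Vec<BenchmarkResult> {
    Vec::new()
}

// Flatten commands and imports into one result list, in CLI order.
fn collect(entries: Vec<Entry>) -> Vec<BenchmarkResult> {
    entries
        .into_iter()
        .flat_map(|entry| match entry {
            Entry::Command(cmd) => vec![run_benchmark(&cmd)],
            Entry::Import(path) => load_results(&path),
        })
        .collect()
}
```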

@RalfJung

In Miri we are planning to hand-implement an ad-hoc approximation of this since we can't really use the existing comparison support (rust-lang/miri#3999). Would be amazing to get proper support for this upstream. :)

@Walther

Walther commented Dec 22, 2024

Adding a broader thought: "have you tried rubbing a database on it?" [1] Could it be worthwhile for hyperfine to add support for slightly more formal, database-shaped data processing and output?

The current JSON output is a nice and useful feature. I've used it extensively for experiments conducted for my master's thesis. Being able to collect all the data into a single file, across a wide variety of parameters provided via --parameter-list (including the combinatorics of multiple lists), is absolutely wonderful. However, analyzing the results is a bit tough. My post-processing scripts are fairly involved in order to transform the data into a more table-shaped, queryable form. Nushell has proved handy here, but it isn't ideal.

Currently, the in-memory representation is the BenchmarkResult struct, and serialization is implemented fairly manually for each output format under the export directory.

Could it be beneficial to use something slightly more "database-y"?

  • With the more literal interpretation, this could be something like sqlite, redis/valkey, duckdb, postgres, or something else. However, this would be a bit of a heavy approach, and probably not the best choice.
  • With a broader interpretation, this could be something like using Arrow for the in-memory processing and Parquet for the serialization, or some other similar options.

In the context of this issue, this could make it much easier to import data from a previous run for a direct comparison, and optionally write the new results to a new file or append them to the previous one. Additionally, it could make analyzing the results with external tools much, much easier, by using a standardized format with a more columns-and-rows approach than JSON.
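As a rough illustration of the broader interpretation, a "long format" export (one row per timing sample) could be written with the arrow and parquet crates along these lines; this is purely a sketch under those assumptions, not hyperfine code:

```rust
use std::fs::File;
use std::sync::Arc;

use arrow::array::{ArrayRef, Float64Array, StringArray};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;
use parquet::arrow::ArrowWriter;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One column per dimension, one row per timing sample.
    let schema = Arc::new(Schema::new(vec![
        Field::new("command", DataType::Utf8, false),
        Field::new("time_s", DataType::Float64, false),
    ]));

    // Three samples for a single command, flattened into rows.
    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![
            Arc::new(StringArray::from(vec!["sleep 1"; 3])) as ArrayRef,
            Arc::new(Float64Array::from(vec![1.012, 1.003, 1.008])) as ArrayRef,
        ],
    )?;

    let mut writer = ArrowWriter::try_new(File::create("results.parquet")?, schema, None)?;
    writer.write(&batch)?;
    writer.close()?;
    Ok(())
}
```

A long shape like this is exactly what makes downstream analysis easier: tools such as DuckDB or pandas can read the Parquet file directly and group or filter by the command (or parameter) columns without custom post-processing.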

I understand that this could be a fairly large undertaking and require some effort. However, it also has the potential to reduce effort in the future: a more standardized data format could help simplify the code base, replacing some of the custom manual implementations with ready-made library methods (e.g. for exports in other formats).

Sorry about the long post with broad scope 🙇‍♂️ and as always, huge thanks for making and maintaining such an incredibly useful tool 🧡

Footnotes

  [1] Just borrowing the funny phrase, not affiliated with the conference.

sharkdp added this to the hyperfine 2.0 milestone on Dec 28, 2024