Add an option to save a benchmark run for later comparisons #607
Comments
That sounds like a cool idea, thank you. It also sounds like a rather complex feature that is not easy to design. So before we go ahead and try to implement this, I'd like to discuss how it would work, what implications it has for other features, what the CLI would look like, etc. Note: we already have `--export-json` for saving results.
Slightly related: #577
I would also welcome this feature. It's currently possible to save a report with `--export-json`, so the comparison could look something like this:

```sh
# before - a single run
hyperfine 'command1' 'command2'

# after - two separate runs
hyperfine 'command1' --export-json baseline.json
hyperfine --import-json baseline.json 'command2'
```

The position of the arguments would determine the order of the compared commands, the same way the position of the commands does in the first example. What do you think?
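For context, hyperfine's JSON export wraps the individual benchmarks in a top-level `results` array with fields such as `command`, `mean`, and `stddev` (field names here are taken from the current export format and may differ between versions). A hypothetical `--import-json`, as proposed above, could deserialize that file and feed the stored entries into the comparison alongside freshly measured commands. A minimal sketch of the import side, assuming `serde` (with the derive feature) and `serde_json` as dependencies; the comparison logic is purely illustrative, not hyperfine's code:

```rust
// Sketch only: the struct fields mirror (a subset of) the current JSON export
// layout; everything else is illustrative.
use serde::Deserialize;
use std::fs;

#[derive(Debug, Deserialize)]
struct ExportFile {
    results: Vec<ExportedResult>,
}

#[derive(Debug, Deserialize)]
struct ExportedResult {
    command: String,
    mean: f64,           // mean wall-clock time in seconds
    stddev: Option<f64>, // may be null when only a single timing exists
}

fn load_baseline(path: &str) -> Result<Vec<ExportedResult>, Box<dyn std::error::Error>> {
    let contents = fs::read_to_string(path)?;
    let parsed: ExportFile = serde_json::from_str(&contents)?;
    Ok(parsed.results)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // baseline.json as produced earlier by `--export-json baseline.json`
    let baseline = load_baseline("baseline.json")?;

    // Placeholder for a freshly measured result of 'command2'.
    let new_command = "command2";
    let new_mean = 0.42_f64;

    for old in &baseline {
        let change = (new_mean - old.mean) / old.mean * 100.0;
        println!(
            "{} -> {}: {:.3} s vs {:.3} s ({:+.1}%)",
            old.command, new_command, old.mean, new_mean, change
        );
    }
    Ok(())
}
```

The positional-ordering idea from the example above would then presumably just determine where the imported entries are slotted into the list of results before the relative comparison is computed.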
In Miri we are planning to hand-implement an ad-hoc approximation of this since we can't really use the existing comparison support (rust-lang/miri#3999). Would be amazing to get proper support for this upstream. :)
Adding a broader thought: "have you tried rubbing a database on it?" Could it be worthwhile for hyperfine to keep its results in something slightly more "database-y" than the current in-memory representation, which each export format then serializes in its own way?

In the context of this issue, this could make it much easier to import data from a previous run for a direct comparison, and optionally write the new results either to a new file or append them to the previous file.

Additionally, this could make the analysis of the results with external tools much, much easier, by using a standardized format with a more columns-and-rows approach than the current JSON export.

I understand that this could be a fairly large undertaking and require some effort. However, it also has the potential to reduce effort in the future: a more standardized data format could help simplify the code base, replacing some of the custom manual implementations with ready-made library methods (e.g. for exports in other formats).

Sorry about the long post with broad scope 🙇‍♂️ and, as always, huge thanks for making and maintaining such an incredibly useful tool 🧡
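As a concrete (and purely hypothetical) illustration of the columns-and-rows idea, the sketch below uses the `rusqlite` crate to append individual timings to an SQLite table and then compares runs with a plain SQL query. None of the table or column names come from hyperfine; this is only what a "database-y" results file could look like in principle:

```rust
// Illustrative only: the schema and crate choice are hypothetical,
// not an actual or proposed hyperfine implementation.
use rusqlite::{params, Connection};

fn main() -> rusqlite::Result<()> {
    let conn = Connection::open("benchmarks.db")?;

    // One row per individual timing measurement, keyed by run label and command.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS timings (
            run_label TEXT NOT NULL,
            command   TEXT NOT NULL,
            time_sec  REAL NOT NULL
        )",
        params![],
    )?;

    // Pretend these came from a benchmark run; appending a later run under a
    // different label would enable baseline-vs-new comparisons via SQL.
    let measurements = [
        ("baseline", "command1", 0.312),
        ("baseline", "command1", 0.305),
    ];
    for (run, cmd, t) in measurements {
        conn.execute(
            "INSERT INTO timings (run_label, command, time_sec) VALUES (?1, ?2, ?3)",
            params![run, cmd, t],
        )?;
    }

    // Example comparison query: mean time per command per run.
    let mut stmt = conn.prepare(
        "SELECT run_label, command, AVG(time_sec) FROM timings GROUP BY run_label, command",
    )?;
    let rows = stmt.query_map(params![], |row| {
        Ok((
            row.get::<_, String>(0)?,
            row.get::<_, String>(1)?,
            row.get::<_, f64>(2)?,
        ))
    })?;
    for row in rows {
        let (run, cmd, mean) = row?;
        println!("{run} / {cmd}: mean {mean:.3} s");
    }
    Ok(())
}
```

With something along these lines, "compare against a saved baseline" becomes a query rather than a bespoke code path, and external tools could read the same file directly.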
There are times when I want to have a baseline saved that I can compare later changes against. It would be convenient to have a way to save a benchmarking run and then load it into later runs.
If you're interested in supporting this, then I can work on adding it as a feature.