This repository has been archived by the owner on May 19, 2023. It is now read-only.

Automated regular benchmarking #72

Open

Th3Whit3Wolf opened this issue Mar 14, 2021 · 12 comments

@Th3Whit3Wolf
Contributor

I was wondering if you have thought about setting up a GitHub Action to automatically update the README at a regular interval? Like maybe once a week, and on pull requests.

Or maybe switch all of the dependencies from git references to released versions and then have the script run on every PR, as long as nothing breaks. I think you can set up Dependabot to automatically merge pull requests as long as a CI test passes.
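For reference, a minimal Dependabot config for a Cargo project looks something like the sketch below. Note that auto-merging is not expressed in this file itself; it would need a separate workflow or branch-protection rule that merges once CI passes.

```yaml
# Sketch of .github/dependabot.yml for a Cargo project (Dependabot v2 schema).
# Auto-merge is NOT configured here; it requires a separate GitHub Actions
# workflow or branch-protection rule that merges once CI passes.
version: 2
updates:
  - package-ecosystem: "cargo"
    directory: "/"
    schedule:
      interval: "weekly"
```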

@djc
Collaborator

djc commented Mar 15, 2021

Running the benchmarks on GitHub Actions might result in really variable performance, which might affect the relative results, so it doesn't seem like a trivial thing to set up. If you want updates to happen more often, I think the path of least resistance would be to automate formatting the Criterion results into the format I publish here.

@Th3Whit3Wolf
Contributor Author

> Running the benchmarks on GitHub Actions might result in really variable performance, which might affect the relative results

What makes you say that? GitHub Actions runners use Standard_DS2_v2 virtual machines in Microsoft Azure, and the Rust team uses them to track performance regressions.

@Th3Whit3Wolf
Contributor Author

I've added a feature flag to build.rs that, when run, looks through Cargo.toml, finds all of the dependencies that aren't whitelisted, gets their description and homepage/repo from crates.io using crates_io_api (there's a lot more we could potentially grab from there), and creates the markdown-formatted links.
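As an illustration, fetching that metadata with crates_io_api's blocking client could look roughly like the sketch below. The method and field names are from crates_io_api's public types as I remember them; treat the details as assumptions if you're on a different version.

```rust
// Sketch: pull crate metadata from crates.io via crates_io_api's SyncClient.
use crates_io_api::SyncClient;
use std::time::Duration;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // crates.io asks clients for a descriptive user agent and rate limiting.
    let client = SyncClient::new("readme-updater-bot", Duration::from_millis(1000))?;
    for name in ["askama", "handlebars", "tera"] {
        let resp = client.get_crate(name)?;
        let c = resp.crate_data;
        // Prefer the homepage; fall back to the repository URL.
        let link = c.homepage.or(c.repository).unwrap_or_default();
        println!("- [{}]({}): {}", c.name, link, c.description.unwrap_or_default());
    }
    Ok(())
}
```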

I have made a script that runs this, then the benchmarks, and then pulls the benchmark data from /target/criterion/ and formats it into markdown tables. The end result is below.
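For the table-generation step, here is a minimal sketch of reading one benchmark's saved estimates, assuming Criterion's usual target/criterion/&lt;group&gt;/&lt;bench&gt;/new/estimates.json layout with values stored in nanoseconds; the exact path below is hypothetical and the layout may differ across Criterion versions.

```rust
// Sketch: read a Criterion estimates.json and emit one markdown table row.
use std::fs;

// Scale nanoseconds into the unit Criterion-style reports usually show.
fn format_ns(ns: f64) -> String {
    if ns >= 1e6 {
        format!("{:.4} ms", ns / 1e6)
    } else if ns >= 1e3 {
        format!("{:.3} us", ns / 1e3)
    } else {
        format!("{:.2} ns", ns)
    }
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical path for the "Big table" group's Askama benchmark.
    let raw = fs::read_to_string("target/criterion/Big table/Askama/new/estimates.json")?;
    let v: serde_json::Value = serde_json::from_str(&raw)?;
    let mean = &v["mean"];
    let lower = mean["confidence_interval"]["lower_bound"].as_f64().unwrap();
    let est = mean["point_estimate"].as_f64().unwrap();
    let upper = mean["confidence_interval"]["upper_bound"].as_f64().unwrap();
    println!(
        "| Askama | {} | {} | {} |",
        format_ns(lower),
        format_ns(est),
        format_ns(upper)
    );
    Ok(())
}
```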

@Th3Whit3Wolf
Contributor Author

Rust template engine benchmarks

This repo tries to assess Rust template engine performance. Following the
download ratings from crates.io, these nine projects are assessed:

Results

These results were produced by GitHub Actions.

As violin plots generated by Criterion:

[Big table violin plot]
[Teams violin plot]

Numbers, as output by Criterion:

Big Table

| Library    | Lower bound | Estimate  | Upper bound |
|------------|-------------|-----------|-------------|
| Askama     | 396.40 us   | 397.29 us | 398.34 us   |
| fomat      | 187.56 us   | 187.70 us | 187.84 us   |
| Handlebars | 4.3653 ms   | 4.3748 ms | 4.3912 ms   |
| Horrorshow | 264.86 us   | 265.87 us | 266.86 us   |
| Liquid     | 5.0699 ms   | 5.0741 ms | 5.0783 ms   |
| Markup     | 70.159 us   | 75.223 us | 80.683 us   |
| Maud       | 215.73 us   | 227.47 us | 240.76 us   |
| Ructe      | 749.90 us   | 750.31 us | 750.80 us   |
| Sailfish   | 26.754 us   | 28.762 us | 30.840 us   |
| Tera       | 3.4972 ms   | 3.5284 ms | 3.5465 ms   |
| write      | 354.09 us   | 381.41 us | 409.24 us   |

Teams

| Library    | Lower bound | Estimate  | Upper bound |
|------------|-------------|-----------|-------------|
| Askama     | 756.79 ns   | 814.72 ns | 875.17 ns   |
| fomat      | 493.12 ns   | 529.63 ns | 566.09 ns   |
| Handlebars | 6.2966 us   | 6.7593 us | 7.2227 us   |
| Horrorshow | 443.48 ns   | 445.23 ns | 447.03 ns   |
| Liquid     | 9.2885 us   | 9.8717 us | 10.436 us   |
| Markup     | 102.74 ns   | 104.12 ns | 105.63 ns   |
| Maud       | 460.82 ns   | 463.12 ns | 465.62 ns   |
| Ructe      | 761.29 ns   | 817.00 ns | 878.03 ns   |
| Sailfish   | 99.475 ns   | 99.552 ns | 99.637 ns   |
| Tera       | 5.8723 us   | 6.3235 us | 6.7808 us   |
| write      | 621.93 ns   | 671.17 ns | 722.17 ns   |

Running the benchmarks

$ cargo bench

Plots will be rendered if gnuplot is installed and will be available in the target/criterion folder.

@Th3Whit3Wolf
Contributor Author

The script could probably be made to sort the numbers in ascending order.

@djc
Collaborator

djc commented Mar 17, 2021

Well, we can try it on a branch to see how variable the results are. Note that the initial crates list should remain ordered by popularity (crates.io recent downloads) and I think I want to stick to testing Git dependencies.

What is your use case anyway -- why is all this important to you?

@Th3Whit3Wolf
Contributor Author

Cool beans, I'll try to get a PR submitted before next week.

May I ask why you like using git dependencies? It seems to me that pinning a specific version is the more common way to use crates, and it's maybe a little fairer to the crate authors.

I don't really have a use case. I browse this repo sometimes to see what the fastest templating library is right now, and I thought: why not automate things to keep everything up to date? And while we're at it, why not read the dependencies, so that when someone wants to add a new benchmark, that's all they have to do and the README will update itself for them. I'm also procrastinating on a project.

@Th3Whit3Wolf
Contributor Author

Th3Whit3Wolf commented Mar 21, 2021

I have it mostly set up here. I still need to set up the cron aspect of it, but other than that it does mostly what we've discussed. I am waiting to get a PR accepted on the crates_io_api repo before I submit a PR to you.

It generates a table of all the templating libraries, including their rank, name, description, recent downloads, and when each was last updated. The tables with the results are now sorted by average performance as well.
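For the cron aspect, a scheduled GitHub Actions workflow would look something like the sketch below; the script name and commit step are placeholders, not the actual files from my branch.

```yaml
# Sketch: run the benchmarks weekly and on demand, then commit the
# regenerated README. The update-readme script is a hypothetical placeholder.
name: benchmarks
on:
  schedule:
    - cron: '0 0 * * 0'   # every Sunday at midnight UTC
  workflow_dispatch:
jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: cargo bench
      - run: ./scripts/update-readme.sh   # hypothetical: formats Criterion output into README tables
      - run: |
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git commit -am "Update benchmark results" || echo "no changes"
          git push
```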

@Th3Whit3Wolf
Contributor Author

I added a relative performance column to the results tables to address #10
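For what it's worth, one way such a column can be derived is each library's estimate divided by the fastest estimate in the table. Here is a small sketch using a few of the "Big Table" numbers above; this is an illustration, not necessarily the exact formula the script uses.

```rust
// Sketch: derive a relative-performance column from mean estimates.
fn relative(estimates_us: &[(&str, f64)]) -> Vec<(String, f64)> {
    // The fastest library becomes the 1.0x baseline.
    let fastest = estimates_us.iter().map(|(_, e)| *e).fold(f64::INFINITY, f64::min);
    estimates_us
        .iter()
        .map(|(name, e)| (name.to_string(), e / fastest))
        .collect()
}

fn main() {
    // A few "Big Table" estimates from the tables above, in microseconds.
    let rows = [("Sailfish", 28.762), ("Markup", 75.223), ("Askama", 397.29)];
    for (name, rel) in relative(&rows) {
        println!("{name}: {rel:.1}x"); // Sailfish: 1.0x, Markup: 2.6x, Askama: 13.8x
    }
}
```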

@utkarshkukreti
Contributor

The README looks nice @Th3Whit3Wolf! I find the violin plots impossible to read as well. Are they useful for anyone if we have these new tables? Should they be removed @djc?

@Th3Whit3Wolf
Contributor Author

@utkarshkukreti thank you!

@djc cron is set up now as well.

@sunng87
Contributor

sunng87 commented Nov 19, 2021

@Th3Whit3Wolf the README looks nice; unfortunately, the cron job is disabled. @djc is there any chance of merging it into this repository?

Another idea is to test released versions only and use Dependabot to track the latest releases of each template engine.
