Automated regular benchmarking #72
Running the benchmarks on GitHub Actions might result in really variable performance, which might affect the relative results, so it doesn't seem like a trivial thing to set up. If you want updates to happen more often, I think the path of least resistance would be to automate formatting the Criterion results into the format I publish here.
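For what it's worth, a minimal sketch of that formatting step, assuming Criterion 0.3's default on-disk layout (`target/criterion/<group>/<bench>/new/estimates.json`, with mean times in nanoseconds) and a `serde_json` dependency; the paths and column layout are illustrative, not the script actually used here:

```rust
use std::fs;
use std::path::Path;

// Walk target/criterion/<group>/<benchmark>/new/estimates.json and print one
// markdown table per group, with mean times converted to milliseconds.
// Layout and field names assume Criterion 0.3's default JSON output.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let root = Path::new("target/criterion");
    for entry in fs::read_dir(root)? {
        let group = entry?.path();
        // Skip Criterion's HTML report directory and any stray files.
        if !group.is_dir() || group.file_name().map_or(false, |n| n == "report") {
            continue;
        }
        println!("### {}\n", group.file_name().unwrap().to_string_lossy());
        println!("| Library | Mean (ms) |");
        println!("| --- | --- |");
        for entry in fs::read_dir(&group)? {
            let bench = entry?.path();
            let estimates = bench.join("new/estimates.json");
            if !estimates.is_file() {
                continue;
            }
            let json: serde_json::Value =
                serde_json::from_str(&fs::read_to_string(&estimates)?)?;
            // Criterion records point estimates in nanoseconds.
            let mean_ns = json["mean"]["point_estimate"].as_f64().unwrap_or(0.0);
            println!(
                "| {} | {:.4} |",
                bench.file_name().unwrap().to_string_lossy(),
                mean_ns / 1_000_000.0
            );
        }
        println!();
    }
    Ok(())
}
```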
What makes you say that? GitHub Actions runners use Standard_DS2_v2 virtual machines in Microsoft Azure, and the Rust team uses them to track performance regressions.
I've added a feature flag to build.rs that, when run, looks through Cargo.toml, finds all of the dependencies that aren't whitelisted, gets their description and homepage/repo from crates.io using crates_io_api (there's a lot more it could potentially grab from there), and creates the markdown-formatted links. I have made a script that runs this, then the benchmarks, and then collects the resulting benchmark data.
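As a rough sketch of how that lookup could work, assuming crates_io_api's blocking `SyncClient` plus the `toml` crate; the whitelist contents below are placeholders, and the exact field names on the returned crate data may differ between crates_io_api versions:

```rust
use std::time::Duration;

use crates_io_api::SyncClient;

// Crates that are part of the harness rather than engines under test
// (placeholder list, not the repo's actual whitelist).
const WHITELIST: &[&str] = &["criterion", "serde", "serde_json"];

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // crates.io requires a user agent and asks for at most 1 request/second.
    let client = SyncClient::new("template-benchmarks-readme", Duration::from_secs(1))?;

    let manifest: toml::Value = toml::from_str(&std::fs::read_to_string("Cargo.toml")?)?;
    let deps = match manifest.get("dependencies").and_then(|d| d.as_table()) {
        Some(table) => table,
        None => return Ok(()),
    };

    for name in deps.keys().filter(|n| !WHITELIST.contains(&n.as_str())) {
        let info = client.get_crate(name)?.crate_data;
        // Prefer the homepage, fall back to the repository URL.
        let link = info.homepage.or(info.repository).unwrap_or_default();
        let description = info.description.unwrap_or_default();
        println!("- [{}]({}): {}", name, link, description);
    }
    Ok(())
}
```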
Rust template engine benchmarks
This repo tries to assess Rust template engine performance. Following the …

Results
These results were produced by GitHub Actions. As a violin plot generated by Criterion: [violin plot]. Numbers, as output by Criterion: Big Table, Teams.

Running the benchmarks
`$ cargo bench`
Plots will be rendered if …
The script could probably be made to sort the numbers in ascending order.
Well, we can try it on a branch to see how variable the results are. Note that the initial crates list should remain ordered by popularity (crates.io recent downloads) and I think I want to stick to testing Git dependencies. What is your use case anyway -- why is all this important to you?
Cool beans, I'll try to get a PR submitted before next week. May I ask why you like using git dependencies? It seems to me like a specific version is the more common way to use crates and is maybe a little more fair to the crate authors. I don't really have a use case. I browse this repo sometimes to see what's the fastest templating library right now, and I thought: why not automate things to keep everything up to date, and while we're at it, why not read the dependencies so that when someone wants to add a new benchmark, that's all they have to do and the README will update itself for them. I'm also procrastinating on a project.
I have it mostly set up here. I still need to set up the cron aspect of it, but other than that it does mostly what I've discussed. I am waiting to get a PR accepted on the crates_io_api repo before I submit a PR to you. It generates a table of all the templating libraries, including their rank, name, description, recent downloads, and when they were last updated. The tables with the results are sorted by average performance now as well.
I added a relative performance column to the results tables to address #10.
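Presumably that column is each mean divided by the fastest mean in its table, so the fastest engine reads 1.00x; a tiny sketch with made-up numbers (not the actual script):

```rust
// Relative performance: each mean divided by the fastest mean in the group.
// Library names are real crates from the benchmark, but the numbers are invented.
fn main() {
    let results = [("askama", 1.20_f64), ("tera", 4.80), ("handlebars", 6.50)];
    let fastest = results.iter().map(|&(_, ms)| ms).fold(f64::INFINITY, f64::min);
    for (name, ms) in results {
        println!("| {} | {:.2} ms | {:.2}x |", name, ms, ms / fastest);
    }
}
```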
The README looks nice @Th3Whit3Wolf! I find the violin plots impossible to read as well. Are they useful for anyone if we have these new tables? Should they be removed @djc?
@utkarshkukreti thank you! @djc cron is set up now as well.
@Th3Whit3Wolf the README looks nice, but unfortunately the cron job is disabled. @djc is there any chance to merge it into this library? Another idea is to test released versions only and use Dependabot to track the latest releases of each template engine.
I was wondering if you have thought about setting up a GitHub Action to automatically update the README at a regular interval? Like maybe once a week, and on pull requests.
Or maybe switch all of the dependencies from git sources to released versions and then have the script run on every PR as long as nothing breaks. I think you can set up Dependabot to automatically merge pull requests as long as a CI test passes.