Benchmark repository #1
Replies: 2 comments 2 replies
Folder structure
Proposed folder structure:
Some possible versions to support for the libraries:
Command line interface
Specifying the combinations of what to run will be determined by two command line parameters to the benchmark script:
Defaults
There are two levels of defaults:
Examples
Here are some examples to illustrate usage.
One benefit of this way of specifying what to run is its flexibility. However, it does leave room for meaningless definitions. For example:
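To make the flexibility (and the "meaningless definitions" pitfall) concrete, here is a minimal sketch of how two parameters — a list of dependency versions and a list of implementations — could be expanded into concrete benchmark runs. The flag semantics, the function name `expandRuns`, and the rule for skipping nonsense combinations are all assumptions for illustration, not the actual interface:

```javascript
// Hypothetical sketch: expand two CLI parameter lists into concrete runs.
// The skip rule below (a "signals" impl requires a signals dependency) is
// one assumed way to filter out meaningless definitions.
function expandRuns(dependencies, impls) {
  const runs = [];
  for (const dep of dependencies) {
    for (const impl of impls) {
      // Meaningless combination: a signals implementation run against a
      // dependency label that doesn't include signals.
      if (impl === "signals" && !dep.includes("signals")) continue;
      runs.push({ dependency: dep, impl });
    }
  }
  return runs;
}

// Two dependency versions crossed with two implementations yields three
// runs; preact-master + signals is dropped as meaningless.
const runs = expandRuns(
  ["preact-master", "preact-signals-local"],
  ["hooks", "signals"]
);
```

A cross-product with a validity filter like this keeps the CLI surface small (two flags) while still letting the user request any sensible subset.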
Problem
There are 3 challenges I see with how we do benchmarking today.
1. Scattered benchmark infra
Currently, we have benchmarks scattered across our various repositories (preact, signals, render-to-string, others). Some of these use copies of the same infra (e.g. tachometer) and scripts, while others are hand-written. Keeping these up to date and in sync is becoming more difficult and time-consuming.
2. Not all Preact "styles" are covered
There are different "styles" in which Preact apps can be written (e.g. class components, hooks, or using compat). Today, the preact repo only has implementations that test the core library, which only natively supports class components. Changes to hooks or compat won't impact our current benchmarks, so we can't get feedback on how changes to those libraries affect our performance metrics.
3. Can't compare across implementations
Building on the previous challenge, we don't have infrastructure to compare the same app built using different "styles" (e.g. hooks vs. class vs. compat). This challenge is particularly felt for signals, which lives in a different repository from Preact core. This lack of infra prevents us from having the data to make decisions about what to keep in core and what to move out of it. Perhaps bringing a feature into core would significantly improve performance for that use case, or moving a feature out of core and into compat would meaningfully improve core's performance. Being able to get more data on those kinds of changes would be helpful.
In moving our benchmarks to their own repo, I'd like to address each of these.
Key updates
To address these problems, we are making two big changes to our benchmark infrastructure:
1. Move benchmarks into a separate repository
This new repository will be the central place for most of our benchmarks. Individual implementation repositories (e.g. preact, signals, preact-render-to-string) can check out this repository in their CI builds to run the benchmarks against the local changes in PRs. The scripts in this repository will also provide a facility to run benchmarks against local enlistments on your machine, making local development easy.
2. Add implementation pivot to scripts
Our benchmark scripts will be updated to understand that different implementations may exist for each benchmark. So for one benchmark (e.g. 02_replace1k.html), implementations may exist using preact classes, hooks, compat, and signals. When running the benchmarks, the user can choose which versions of the backing libraries (e.g. preact-master, preact-signals-local) and which implementations (e.g. signals and hooks) to run.
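The implementation pivot can be pictured as a lookup from each benchmark to the styles it has been ported to, intersected with the user's selection. The table contents and the second benchmark name below are illustrative assumptions; only 02_replace1k.html is named in this proposal:

```javascript
// Hypothetical sketch: each benchmark lists the implementation "styles"
// it has been written in. The second entry is a made-up example.
const benchmarks = {
  "02_replace1k.html": ["class", "hooks", "compat", "signals"],
  "another_benchmark.html": ["class", "hooks"],
};

// Return the implementations to run for a benchmark: the user's requested
// styles, filtered down to those the benchmark actually provides.
function selectImpls(benchmark, requested) {
  const available = benchmarks[benchmark] ?? [];
  return requested.filter((impl) => available.includes(impl));
}

// A user asking for signals + hooks gets both for 02_replace1k.html, but
// only hooks for a benchmark with no signals implementation.
const toRun = selectImpls("02_replace1k.html", ["signals", "hooks"]);
```

Filtering rather than erroring on missing styles lets one command run a selection across every benchmark, even when coverage is uneven.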