Increase adoption among packages #92
Comments
Let me also mention that it would be very useful to add
This is a great discussion. I definitely would like to see more use of PkgBenchmark.jl in the Julia community. I also wanted to share @maxbennedich's utilities for performance testing of Julia packages: https://github.com/maxbennedich/julia-regression-analysis. It doesn't use PkgBenchmark.jl (it uses BenchmarkTools.jl directly), but I think there are still good tools in there that we can use. Also, Max was gracious enough to MIT-license it, so we could incorporate some of its code into PkgBenchmark.jl. I really like how Max's code makes it easy to take your existing unit test suite and turn it into a benchmark suite with very little effort.
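For a rough idea of the pattern (just a sketch, not code from Max's repo): take a call your unit tests already make and time it with BenchmarkTools instead of asserting on it:

```julia
using BenchmarkTools

SUITE = BenchmarkGroup()
SUITE["sorting"] = BenchmarkGroup()

# Where a unit test would do `@test issorted(sort(xs))`, the benchmark
# suite times the same call; `setup` regenerates the input for each sample.
SUITE["sorting"]["sort"] = @benchmarkable sort(xs) setup = (xs = rand(1000))
```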
FYI, I put things together in an (unregistered) package https://github.com/tkf/BenchmarkCI.jl for running benchmarks via GitHub Actions with a minimal setup. It runs `PkgBenchmark.judge` on each pull request and posts the result. If you already have a `benchmark/benchmarks.jl`, the setup is as simple as adding this workflow file:

```yaml
name: Run benchmarks
on:
  pull_request:
jobs:
  Benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@latest
        with:
          version: '1.3'
      - name: Install dependencies
        run: julia -e 'using Pkg; pkg"add PkgBenchmark https://github.com/tkf/BenchmarkCI.jl"'
      - name: Run benchmarks
        run: julia -e "using BenchmarkCI; BenchmarkCI.judge()"
      - name: Post results
        run: julia -e "using BenchmarkCI; BenchmarkCI.postjudge()"
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
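If I read the BenchmarkCI README right, the same two calls can also be tried locally before wiring up CI; a sketch, using `displayjudgement` to print the comparison instead of posting it:

```julia
using BenchmarkCI

# Compare the current checkout against the default branch,
# as the workflow above does for each pull request.
BenchmarkCI.judge()

# Print the judgement in the terminal instead of posting it to a PR.
BenchmarkCI.displayjudgement()
```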
Looks like a great setup @tkf! I was thinking a bit more about the manifest. I agree it makes a lot of sense to check it in for benchmarking, in order to be sure it's a fair comparison. However, I'm a little worried about the maintenance burden of needing to update it so that you're testing with the same versions you'd actually be using. Could the benchmarking script maybe update the manifest in the PR after finishing the benchmark? That way it's up to date for next time. What do you think?
I just coded up the function that expects
I set up yet another GitHub Actions workflow to update Manifest.toml every day and push it to a PR: https://github.com/tkf/Transducers.jl/blob/master/.github/workflows/pkg-update.yml Example PR: JuliaFolds/Transducers.jl#120 This way, all CI runs for every update of any upstream package. I think it's good to know exactly which version of a dependency breaks the package.
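For reference, a minimal sketch of such a scheduled workflow; the daily cron and the `peter-evans/create-pull-request` action are illustrative choices here, not necessarily what the linked workflow uses:

```yaml
name: Update Manifest.toml
on:
  schedule:
    - cron: '0 0 * * *'  # run once a day
jobs:
  update-manifest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@latest
      - name: Update dependencies
        run: julia --project=. -e 'using Pkg; Pkg.update()'
      - uses: peter-evans/create-pull-request@v2
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          commit-message: Update Manifest.toml
          title: Update Manifest.toml
          branch: update-manifest
```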
Ah, good idea! The links don't seem quite right in your post, btw.
Oops. I fixed the links.
OK, so it turned out you can use it as-is without a `Manifest.toml`.
It may also be useful to add PkgBenchmark to PkgTemplates. It would be very convenient to have a proper benchmark setup from the start of a new project.
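From the user's side it could look something like this; `Benchmarks()` is hypothetical (no such plugin exists today), while the other plugins are real PkgTemplates ones:

```julia
using PkgTemplates

# `Benchmarks()` is a hypothetical plugin that would create
# benchmark/benchmarks.jl and the CI wiring; Git/GitHubActions exist today.
t = Template(;
    user = "myusername",
    plugins = [Git(), GitHubActions(), Benchmarks()],
)
t("MyPackage")
```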
Many packages would benefit from using PkgBenchmark.jl to catch performance regressions, and adoption would increase if the barrier to using it were lowered even further.
Wondering if any of the following ideas should be pursued:
- Making it possible to just add a `benchmark/benchmarks.jl` file and set a flag (similar to the code coverage flags) to enable package benchmarking (a hypothetical sketch of such a flag follows this list).
- A CI category in Discourse, either under Domains or Tooling.
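To make the first idea concrete: Travis's Julia support already enables coverage with one-line flags (`codecov: true` / `coveralls: true`), so a benchmarking flag might look like this purely hypothetical `.travis.yml` (the `benchmark` key does not exist):

```yaml
language: julia
julia:
  - 1.3
codecov: true     # existing flag: submit coverage after tests
benchmark: true   # hypothetical flag: run benchmark/benchmarks.jl and report regressions
```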