
Increase adoption among packages #92

Open
milesfrain opened this issue Sep 25, 2019 · 10 comments

Comments

@milesfrain
Contributor

Many packages would benefit from using PkgBenchmark.jl to catch performance regressions, and adoption would increase if the barrier to usage were lowered further.

Wondering if any of the following ideas should be pursued:

  1. Include additional usability features. @tkf put together some helper scripts that @ericphanson and I have used in three packages so far. I suspect more packages will follow this trend, leading to a lot of copied boilerplate. It may be best for PkgBenchmark to absorb this functionality.
  2. Add automatic package benchmarking support to the Travis CI Julia script. It would be really nice if users could just create a valid benchmark/benchmarks.jl file and set a flag (similar to the code coverage flags) to enable package benchmarking.
  3. Curate a list of projects that implement package benchmarking effectively. Similar to how the Travis CI page has a section for example projects.
  4. Maintain a simplest-possible example project that still behaves like a Julia module. I'm thinking of something like the reference SUITE, but with the benchmarked functions wrapped in a module (e.g. mysin, mycos, mytan); see the sketch after this list.
  5. Add a CI category in Discourse, either under Domains or Tooling.
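
For idea 4, a minimal benchmark/benchmarks.jl might look roughly like the sketch below. The MyTrig module and its functions are hypothetical stand-ins for a real package; the only hard requirement from PkgBenchmark is that the file defines a BenchmarkGroup named SUITE.

# benchmark/benchmarks.jl -- minimal sketch; a real package would
# `using MyPkg` instead of defining the module inline.
using BenchmarkTools

module MyTrig
mysin(x) = sin(x)
mycos(x) = cos(x)
mytan(x) = tan(x)
end

# PkgBenchmark runs the BenchmarkGroup bound to the name SUITE.
const SUITE = BenchmarkGroup()
SUITE["trig"] = BenchmarkGroup()
SUITE["trig"]["mysin"] = @benchmarkable MyTrig.mysin(1.0)
SUITE["trig"]["mycos"] = @benchmarkable MyTrig.mycos(1.0)
SUITE["trig"]["mytan"] = @benchmarkable MyTrig.mytan(1.0)
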
@tkf
Copy link
Collaborator

tkf commented Sep 25, 2019

Let me also mention that it would be very useful to add Project.toml/Manifest.toml integration in PkgBenchmark.jl. I think some parts of the script can be simplified a lot if PkgBenchmark.jl has a simple option to handle it. I found that it can be problematic especially when the package requirements change between the target and baseline; the script may try to run in a non-instantiated environment. I imagine adding an option in BenchmarkConfig() to auto-instantiate the environment wouldn't be so hard. (I guess I'll try it at some point, but if anybody wants to do it, that would be great.)
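
To make that concrete, the manual workaround today is roughly the sketch below; the instantiate keyword shown in the comment is the proposed option, not an existing one:

using Pkg, PkgBenchmark

# Manual workaround: instantiate the benchmark environment up front, so that
# checking out the baseline commit doesn't leave it out of sync with
# benchmark/Project.toml.
Pkg.activate("benchmark")
Pkg.instantiate()
Pkg.activate(".")

# The proposal would fold this into the config, e.g. (hypothetical keyword):
#   BenchmarkConfig(id = "master", instantiate = true)
judge("MyPkg", BenchmarkConfig(id = "master"))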

@DilumAluthge
Member

This is a great discussion. I definitely would like to see more use of PkgBenchmark.jl in the Julia community.

I also wanted to share @maxbennedich’s utilities for performance testing of Julia packages: https://github.com/maxbennedich/julia-regression-analysis. It doesn’t use PkgBenchmark.jl (it uses BenchmarkTools.jl directly), but I think there are still good tools in there that we can use. Also, Max was gracious enough to MIT-license it, so we could incorporate some of its code into PkgBenchmark.jl. I really like how Max’s code makes it easy to take your existing unit test suite and turn it into a benchmark suite with very little effort.
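
The general idea of reusing tests as benchmarks could look roughly like this (a sketch with hypothetical names, using BenchmarkTools directly rather than Max's actual API):

using BenchmarkTools, Test

# A unit test we already have (hypothetical example function).
myfunc(xs) = sum(abs2, xs)
@testset "myfunc" begin
    @test myfunc([1, 2]) == 5
end

# The same call reused as a benchmark, so speed regressions
# show up alongside correctness regressions.
suite = BenchmarkGroup()
suite["myfunc"] = @benchmarkable myfunc(xs) setup=(xs = rand(1000))
run(suite; verbose = true)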

@tkf
Collaborator

tkf commented Jan 18, 2020

FYI, I put things together in an (unregistered) package https://github.com/tkf/BenchmarkCI.jl for running benchmarks via GitHub Actions with a minimal setup. It runs judge against origin/master and then posts the result as a comment in the PR.

If you already have benchmark/benchmarks.jl, benchmark/Project.toml and benchmark/Manifest.toml, it should work out-of-the-box by just adding (say) .github/workflows/benchmark.yml with

name: Run benchmarks

on:
  pull_request:

jobs:
  Benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@latest
        with:
          version: 1.3
      - name: Install dependencies
        run: julia -e 'using Pkg; pkg"add PkgBenchmark https://github.com/tkf/BenchmarkCI.jl"'
      - name: Run benchmarks
        run: julia -e "using BenchmarkCI; BenchmarkCI.judge()"
      - name: Post results
        run: julia -e "using BenchmarkCI; BenchmarkCI.postjudge()"
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
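
To try it locally before wiring up CI, something like this should work from the package root (assuming BenchmarkCI exposes displayjudgement() for printing the comparison instead of posting it):

using BenchmarkCI
BenchmarkCI.judge()            # compare the working tree against origin/master
BenchmarkCI.displayjudgement() # print the judgement to the terminal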

Examples:
tkf/BenchmarkCIExample.jl#1
JuliaFolds/BangBang.jl#101
JuliaFolds/Transducers.jl#160

@ericphanson

Looks like a great setup, @tkf!

I was thinking a bit more about the manifest. I agree it makes a lot of sense to check it in for benchmarking, to be sure it’s a fair comparison. However, I’m a little worried about the maintenance burden of needing to update it so that you’re testing with the same versions that you’d likely actually be using. Could the benchmarking script maybe update the manifest in the PR after finishing the benchmark? That way it’s up to date for next time. What do you think?

@tkf
Collaborator

tkf commented Jan 18, 2020

I just coded it up to expect benchmark/Manifest.toml first because that's the pattern I use a lot. I don't think it would be hard to make it work without benchmark/Manifest.toml (e.g., we could probably just use something like JULIA_LOAD_PATH="@:$PWD/Project.toml").

> However, I’m a little worried about the maintenance burden of needing to update it so that you’re testing with the same versions that you’d likely actually be using.

I set up yet another GitHub Actions workflow to update Manifest.toml every day and push it to a PR: https://github.com/tkf/Transducers.jl/blob/master/.github/workflows/pkg-update.yml

Example PR: JuliaFolds/Transducers.jl#120

This way, the full CI runs for every update of any upstream package. I think it's good to know exactly which version of a dependency breaks the package.
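
For reference, the Julia side of that workflow is essentially the following; committing the result and opening the PR is handled by the Actions config (a sketch, and the benchmark/ path is an assumption):

using Pkg

# Refresh the manifest to the latest resolvable versions.
Pkg.activate("benchmark")
Pkg.update()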

@ericphanson

Ah, good idea! The links don't seem quite right in your post, btw.

@tkf
Collaborator

tkf commented Jan 19, 2020

Oops. I fixed the links.

@tkf
Collaborator

tkf commented Jan 19, 2020

OK, so it turned out you can use it as-is without benchmark/Manifest.toml by just adding JULIA_LOAD_PATH: '@:.:' to GitHub Actions' env config. Example: tkf/BenchmarkCIExample.jl#5

@Arkoniak

It may also be useful to add PkgBenchmark to PkgTemplates. It would be very convenient to have a proper benchmark setup from the start of a new project.

@DilumAluthge
Member

> It may also be useful to add PkgBenchmark to PkgTemplates. It would be very convenient to have a proper benchmark setup from the start of a new project.

See JuliaCI/PkgTemplates.jl#86
