
ci: Track benchmarks with Bencher #1725

Merged 1 commit into tailcallhq:main on Apr 15, 2024

Conversation

@epompeii (Contributor) commented Apr 14, 2024

Summary:
@tusharmath these changes move Tailcall over to Bencher for tracking the main branch micro-benchmarks and for running micro-benchmarks on pull requests that have the ci: benchmark label. The results for PR runs will be posted as a comment on the pull request.

The macro-benchmarks will not be moved over until a wrk adapter is added to Bencher: bencherdev/bencher#347

For things to work in the tailcallhq/tailcall repo, the BENCHER_API_TOKEN needs to be added as a Repository secret. I believe this is already in place from #1441, but I just wanted to make sure 😃

I have gone ahead and created a Threshold for the main branch, the benchmarking-runner testbed, and the Latency measure (used by Criterion): https://bencher.dev/perf/tailcall/thresholds/1ff7a58c-8add-4a72-a759-bdeccbf6ffa1
This will be used to detect performance regressions on the main branch, and it will be cloned and used for all pull requests.
Feel free to reconfigure this Threshold to suit your needs.
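
For anyone reconfiguring it: a Threshold like this can also be managed from the bencher CLI. Here is a minimal sketch as a one-off workflow step. The project slug comes from the URL above, but the t-test and the 0.98 upper boundary are illustrative assumptions, not the values actually configured:

```yaml
# Hypothetical one-off step; the t-test and 0.98 upper boundary are
# illustrative assumptions, not this project's actual configuration.
- name: Create Threshold
  run: |
    bencher threshold create \
      --project tailcall \
      --token '${{ secrets.BENCHER_API_TOKEN }}' \
      --branch main \
      --testbed benchmarking-runner \
      --measure Latency \
      --test t_test \
      --upper-boundary 0.98
```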

This is an overview of the benchmarking GitHub Actions workflow files (a sketch of the main-branch workflow follows the list):

  • benchmark_main.yml: Runs the micro-benchmarks on pushes to the main branch
  • benchmark_pr_run.yml: Runs the micro-benchmarks on PRs with the ci: benchmark label and caches the results
  • benchmark_pr_track.yml: Posts the cached micro-benchmark results to the PR as a comment
  • benchmark.yml: Runs the macro-benchmarks on pushes to main and PRs with the ci: benchmark label
  • benchmark_comment.yml: Posts the macro-benchmark results to the commit as a comment
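
For context, here is a minimal sketch of what the main-branch tracking workflow looks like with Bencher, following the pattern from the Bencher GitHub Actions docs. The exact bench command and trigger filters are assumptions; see the actual benchmark_main.yml in this PR for the real configuration:

```yaml
# Sketch of a main-branch tracking workflow (see benchmark_main.yml in
# this PR for the real file; the bench command here is assumed).
name: Benchmark main

on:
  push:
    branches: [main]

jobs:
  micro_benchmarks:
    runs-on: benchmarking-runner
    steps:
      - uses: actions/checkout@v4
      # Installs the bencher CLI
      - uses: bencherdev/bencher@main
      - name: Track micro-benchmarks with Bencher
        run: |
          bencher run \
            --project tailcall \
            --token '${{ secrets.BENCHER_API_TOKEN }}' \
            --branch main \
            --testbed benchmarking-runner \
            --adapter rust_criterion \
            --err \
            cargo bench
```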

The documentation for the strategy used to track micro-benchmarks for PRs in GitHub Actions can be found here: https://bencher.dev/docs/how-to/github-actions/#benchmark-fork-pr-and-upload-from-default-branch
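
In short, that strategy splits PR benchmarking in two so the API token is never exposed to code from a fork: benchmark_pr_run.yml runs the benchmarks without any secrets and uploads the results as an artifact, and benchmark_pr_track.yml, triggered by workflow_run on the default branch, downloads the artifact and sends it to Bencher. A condensed, hypothetical sketch of the two halves (artifact handling and branch naming are simplified here; the real workflows also forward the PR number and base branch so Bencher can comment on the right PR):

```yaml
# Sketch of benchmark_pr_run.yml: no secrets, results cached as an artifact.
name: Benchmark PR run

on:
  pull_request:
    types: [opened, synchronize, labeled]

jobs:
  micro_benchmarks_run:
    if: contains(github.event.pull_request.labels.*.name, 'ci: benchmark')
    runs-on: benchmarking-runner
    steps:
      - uses: actions/checkout@v4
      - name: Run micro-benchmarks
        run: cargo bench > benchmark_results.txt
      - uses: actions/upload-artifact@v4
        with:
          name: benchmark_results
          path: benchmark_results.txt
```

```yaml
# Sketch of benchmark_pr_track.yml: runs from the default branch with
# access to secrets, so the token never touches fork PR code.
name: Benchmark PR track

on:
  workflow_run:
    workflows: [Benchmark PR run]
    types: [completed]

jobs:
  micro_benchmarks_track:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: benchmark_results
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - uses: bencherdev/bencher@main
      - name: Track PR micro-benchmarks
        run: |
          bencher run \
            --project tailcall \
            --token '${{ secrets.BENCHER_API_TOKEN }}' \
            --branch '${{ github.event.workflow_run.head_branch }}' \
            --testbed benchmarking-runner \
            --adapter rust_criterion \
            --err \
            --github-actions '${{ secrets.GITHUB_TOKEN }}' \
            --file benchmark_results.txt
```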

Issue Reference(s):
Relates to: #436
Implements: #1300
Fixes: #1441

Build & Testing:

Testing this out requires changing the default branch of the repo (workflow_run workflows only trigger from the default branch) and using ubuntu-latest as the runs-on testbed.
I did this in my fork using the bencher_main branch: https://github.com/epompeii/tailcall/tree/bencher_main
I also created a separate branch to test out creating a PR: https://github.com/epompeii/tailcall/tree/bencher_retry
This is the example pull request: epompeii#1
Which now includes a pull request comment made by Bencher with the micro-benchmark results: epompeii#1 (comment)

Again, my apologies for the hubris of not testing things out the first time around. 🤦🏽‍♂️

  • I ran cargo test successfully.
  • I have run ./lint.sh --mode=fix to fix all linting issues raised by ./lint.sh --mode=check.

Checklist:

  • I have added relevant unit & integration tests.
  • I have updated the documentation accordingly.
  • I have performed a self-review of my code.
  • PR follows the naming convention of <type>(<optional scope>): <title>

Summary by CodeRabbit

  • Refactor
    • Renamed and streamlined GitHub Actions workflows related to benchmarking.
  • Chores
    • Removed specific benchmarking jobs to optimize workflow performance.
  • New Features
    • Improved benchmarking processes for pull requests and main branch commits, enhancing feedback on performance changes.
  • Documentation
    • Updated job names and descriptions for clarity and consistency.


coderabbitai bot commented Apr 14, 2024

Walkthrough

The recent updates to GitHub workflows for a Rust project aim to streamline benchmarking processes. These changes involve job renaming, removal of specific jobs, and the addition of workflows dedicated to main branch and pull request benchmarks. The overall goal is to enhance clarity and efficiency in managing and analyzing benchmark results.

Changes

  • .github/workflows/benchmark.yml: Renamed job to macro_benchmarks; removed the caching and comparison jobs.
  • .github/workflows/benchmark_comment.yml: Renamed job to "Benchmark comment on commit"; updated the job ID to macro_benchmarks_comment.
  • .github/workflows/benchmark_main.yml: New workflow that runs micro-benchmarks on main branch pushes, excluding doc-only changes.
  • .github/workflows/benchmark_pr_run.yml: New workflow triggered by PRs labeled ci: benchmark; saves and uploads the results.
  • .github/workflows/benchmark_pr_track.yml: Tracks and analyzes benchmarks for pull requests using the downloaded results and PR data.

Recent Review Details

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between cec36b4 and 082c684.
Files selected for processing (5):
  • .github/workflows/benchmark.yml (2 hunks)
  • .github/workflows/benchmark_comment.yml (2 hunks)
  • .github/workflows/benchmark_main.yml (1 hunks)
  • .github/workflows/benchmark_pr_run.yml (1 hunks)
  • .github/workflows/benchmark_pr_track.yml (1 hunks)
Files skipped from review as they are similar to previous changes (5)
  • .github/workflows/benchmark.yml
  • .github/workflows/benchmark_comment.yml
  • .github/workflows/benchmark_main.yml
  • .github/workflows/benchmark_pr_run.yml
  • .github/workflows/benchmark_pr_track.yml



codecov bot commented Apr 15, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 87.17%. Comparing base (cec36b4) to head (082c684).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1725      +/-   ##
==========================================
- Coverage   87.18%   87.17%   -0.01%     
==========================================
  Files         149      149              
  Lines       15451    15451              
==========================================
- Hits        13471    13470       -1     
- Misses       1980     1981       +1     


@tusharmath tusharmath added the ci: benchmark Runs benchmarks label Apr 15, 2024
@tusharmath tusharmath enabled auto-merge (squash) April 15, 2024 08:01
@tusharmath tusharmath merged commit 5e9b997 into tailcallhq:main Apr 15, 2024
33 of 34 checks passed
ssddOnTop pushed a commit that referenced this pull request May 2, 2024