
chore: added bencher to track benchmarks #1367

Closed · wants to merge 20 commits

Conversation

@alankritdabral (Contributor) commented Mar 10, 2024

Summary:
Added Bencher

Issue Reference(s):
Fixes #1300

Build & Testing:

  • I ran cargo test successfully.
  • I have run ./lint.sh --mode=fix to fix all linting issues raised by ./lint.sh --mode=check.

Checklist:

  • I have added relevant unit & integration tests.
  • I have updated the documentation accordingly.
  • I have performed a self-review of my code.
  • PR follows the naming convention of <type>(<optional scope>): <title>

Summary by CodeRabbit

  • Chores
    • Implemented a GitHub Actions workflow to track benchmarks on code pushes to the main branch, enhancing performance tracking.

@coderabbitai bot (Contributor) commented Mar 10, 2024

Walkthrough

The recent update integrates "Bencher" for benchmark tracking in the project, specifically focusing on performance metrics. This is achieved through a GitHub Actions workflow, which activates upon pushes to the main branch, excluding documentation updates. It automates the process of benchmarking by using the Bencher CLI, thereby ensuring continuous performance monitoring.

Changes

File: .github/workflows/track_benchmarks.yml
Summary: Added the "Track Benchmark" workflow, which runs on pushes to the main branch (excluding documentation updates), installs the Bencher CLI, and tracks benchmarks.
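Based on the summary above, the workflow's trigger section likely resembles the following sketch. The exact path filters are assumptions, since the file itself is not reproduced in this conversation:

```yaml
# Hypothetical reconstruction of the trigger in
# .github/workflows/track_benchmarks.yml; the paths-ignore
# patterns are assumptions based on the review summary.
name: Track Benchmark

on:
  push:
    branches:
      - main
    paths-ignore:
      - "docs/**"
      - "**.md"
```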

Assessment against linked issues

  • Integrate bencher for benchmarks (#1300): addressed.
  • Push data of older commits for the last 2-3 months (#1300): not clearly addressed. The PR does not explicitly mention backfilling data for older commits; it may require additional setup or manual intervention.

🐇🎉
To code, to code, we've added a tool,
Bencher's the name, making benchmarks cool.
On push, it runs, without a hiccup,
Tracking performance, as our code speeds up.
Here's to progress, may it never stop,
🚀 With each commit, we hop, hop, hop! 🐰
🎉🐇


@alankritdabral alankritdabral changed the title Create track_benchmarks.yml chore: added bencher to track benchmarks Mar 10, 2024
@coderabbitai bot left a comment

Review Status

Actionable comments generated: 2

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between 5220692 and 4633338.
Files selected for processing (1):
  • .github/workflows/track_benchmarks.yml (1 hunks)
Additional comments: 4
.github/workflows/track_benchmarks.yml (4)
  • 3-7: The workflow is configured to trigger on pushes to the main branch, excluding documentation changes. This is a good practice as it ensures that benchmark tracking is only performed on relevant code changes.
  • 12-14: The workflow is granted write permissions for pull-requests and contents. While this is necessary for the workflow to function correctly, it's important to ensure that these permissions are strictly required for the operations being performed to minimize security risks.

Please confirm that the write permissions for pull-requests and contents are strictly necessary for the operations performed by this workflow.

  • 16-20: The environment variables defined here are well-structured and clearly named, which is good for maintainability and readability. However, ensure that BASE_BENCHMARK_RESULTS points to a file that is expected to exist or be created during the workflow run.
  • 22-25: Using actions/checkout@v4 with the ref set to the pull request's head SHA is a good practice as it ensures the workflow operates on the exact commit that triggered it.
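The permissions, environment variables, and checkout step discussed in these comments might look roughly like the sketch below. Only BASE_BENCHMARK_RESULTS and actions/checkout@v4 are named in the review; the job name, file name, and other values are placeholders:

```yaml
# Sketch only; values are placeholders inferred from the review comments.
permissions:
  pull-requests: write   # reviewer asks to confirm these are strictly required
  contents: write

jobs:
  track_benchmarks:
    runs-on: ubuntu-latest
    env:
      # The review notes BASE_BENCHMARK_RESULTS must point to a file that
      # exists or is created during the run; the file name here is assumed.
      BASE_BENCHMARK_RESULTS: base_benchmark_results.json
    steps:
      - uses: actions/checkout@v4
        with:
          # Check out the exact commit that triggered the workflow.
          ref: ${{ github.event.pull_request.head.sha }}
```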

.github/workflows/track_benchmarks.yml: two review comments (outdated, resolved).
@github-actions github-actions bot added the type: chore Routine tasks like conversions, reorganization, and maintenance work. label Mar 10, 2024
@coderabbitai bot left a comment

Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between 4633338 and 554d6e0.
Files selected for processing (1):
  • .github/workflows/track_benchmarks.yml (1 hunks)
Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/track_benchmarks.yml

@coderabbitai bot left a comment

Review Status

Actionable comments generated: 1

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between 554d6e0 and 0203820.
Files selected for processing (1):
  • .github/workflows/track_benchmarks.yml (1 hunks)
Additional comments: 4
.github/workflows/track_benchmarks.yml (4)
  • 3-7: The trigger configuration, which excludes documentation changes and focuses on the main branch, is well thought out for a benchmark tracking workflow.
  • 10-20: The job configuration, including the selection of ubuntu-latest as the runner and the setup of environment variables for Bencher, is appropriately configured for benchmark tracking.
  • 22-29: The steps for checking out code and converting cache data to JSON are correctly configured, showcasing a robust approach to preparing benchmark data for tracking.
  • 31-32: The installation of Bencher CLI using its GitHub repository is a standard practice and ensures that the latest version is used for benchmark tracking.
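The install-and-track steps described here could be sketched as follows. The bencherdev/bencher action is the library's published installer; every flag value shown (project, adapter, results file) is an illustrative placeholder, not the PR's actual configuration:

```yaml
      # Install the Bencher CLI from its GitHub repository (official action).
      - uses: bencherdev/bencher@main
      # Track the converted JSON results; all flag values below are assumed.
      - name: Track benchmarks with Bencher
        run: |
          bencher run \
            --project my-project \
            --token "${{ secrets.BENCHER_API_TOKEN }}" \
            --branch main \
            --adapter json \
            --file "$BASE_BENCHMARK_RESULTS"
```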

.github/workflows/track_benchmarks.yml: one review comment (outdated, resolved).
@coderabbitai bot left a comment

Review Status

Actionable comments generated: 1

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between 0203820 and 9c2ff62.
Files selected for processing (1):
  • .github/workflows/track_benchmarks.yml (1 hunks)
Additional comments: 3
.github/workflows/track_benchmarks.yml (3)
  • 3-7: The workflow is configured to trigger on pushes to the main branch, excluding documentation changes. This is a good practice as it ensures that benchmark tracking is performed only when relevant code changes are made, avoiding unnecessary runs for documentation updates.
  • 12-14: Setting permissions for pull-requests and contents to write is necessary for this workflow as it likely needs to update benchmark results or related content. However, always ensure that the least privilege principle is applied to GitHub Actions to minimize security risks. If the workflow can function with more restrictive permissions, consider adjusting them accordingly.
  • 15-19: The use of a specific runner (benchmarking-runner) and the definition of environment variables for Bencher configuration are well-structured. This setup ensures that benchmarks are run in a consistent environment, which is crucial for accurate performance tracking.

.github/workflows/track_benchmarks.yml: one review comment (outdated, resolved).
@coderabbitai bot left a comment

Review Status

Actionable comments generated: 1

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between 9c2ff62 and 538c603.
Files selected for processing (1):
  • .github/workflows/track_benchmarks.yml (1 hunks)
Additional comments: 3
.github/workflows/track_benchmarks.yml (3)
  • 3-7: The workflow is correctly configured to trigger on pushes to the main branch, excluding documentation changes. This ensures that benchmark tracking is performed only when relevant code changes occur, optimizing resource usage.
  • 12-14: Setting pull-requests and contents permissions to write is necessary for Bencher to update benchmark results. However, ensure that the principle of least privilege is followed and that these permissions are strictly required for the operations Bencher performs.
  • 15-19: The use of a specific runner (benchmarking-runner) and the definition of environment variables for Bencher configuration are well thought out. This setup ensures that benchmarks are run in a consistent environment, which is crucial for accurate performance tracking.

.github/workflows/track_benchmarks.yml: one review comment (outdated, resolved).
@coderabbitai bot left a comment

Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between 538c603 and acb9306.
Files selected for processing (1):
  • .github/workflows/track_benchmarks.yml (1 hunks)
Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/track_benchmarks.yml

@codecov bot commented Mar 15, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 88.55%. Comparing base (1cb6fbb) to head (8b0ca8c).

❗ The current head 8b0ca8c differs from the pull request's most recent head f04df96. Consider uploading reports for commit f04df96 to get more accurate results.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1367      +/-   ##
==========================================
+ Coverage   88.47%   88.55%   +0.08%     
==========================================
  Files         129      129              
  Lines       13799    13751      -48     
==========================================
- Hits        12208    12177      -31     
+ Misses       1591     1574      -17     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@tusharmath (Contributor) commented:

closing in favour of #1441
@epompeii is also the author of the library. Thanks @alankritdabral 🙏

@tusharmath tusharmath closed this Mar 16, 2024
@alankritdabral (Contributor, Author) replied:

> closing in favour of #1441 @epompeii is also the author of the library. Thanks @alankritdabral 🙏

Completely get it. It was fun to learn about the implementation of bencher.dev. 😁 😁

@alankritdabral alankritdabral deleted the add-bencher branch March 19, 2024 01:10
Labels: type: chore (Routine tasks like conversions, reorganization, and maintenance work.)
Linked issue: Integrate bencher for benchmarks
2 participants