chore: added bencher to track benchmarks #1367
Conversation
Walkthrough: The recent update integrates "Bencher" for benchmark tracking in the project, specifically focusing on performance metrics. This is achieved through a GitHub Actions workflow, which activates on pushes to the main branch, excluding documentation updates. It automates the benchmarking process using the Bencher CLI, thereby ensuring continuous performance monitoring.
Review Status
Actionable comments generated: 2
Configuration used: CodeRabbit UI
Files selected for processing (1)
- .github/workflows/track_benchmarks.yml (1 hunks)
Additional comments: 4
.github/workflows/track_benchmarks.yml (4)
- 3-7: The workflow is configured to trigger on pushes to the main branch, excluding documentation changes. This is a good practice as it ensures that benchmark tracking is only performed on relevant code changes.
- 12-14: The workflow is granted `write` permissions for `pull-requests` and `contents`. While this is necessary for the workflow to function correctly, it's important to ensure that these permissions are strictly required for the operations being performed to minimize security risks. Please confirm that the `write` permissions for `pull-requests` and `contents` are strictly necessary for the operations performed by this workflow.
- 16-20: The environment variables defined here are well-structured and clearly named, which is good for maintainability and readability. However, ensure that `BASE_BENCHMARK_RESULTS` points to a file that is expected to exist or be created during the workflow run.
- 22-25: Using `actions/checkout@v4` with the `ref` set to the pull request's head SHA is a good practice, as it ensures the workflow operates on the exact commit that triggered it (a sketch of the trigger and checkout configuration follows this list).
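As a rough illustration of the trigger and checkout points above, a minimal workflow skeleton might look like the sketch below. The branch names, ignored paths, and expressions are assumptions, not the actual contents of `.github/workflows/track_benchmarks.yml`.

```yaml
# Hypothetical sketch only -- paths and expressions are assumptions.
name: Track Benchmarks

on:
  push:
    branches:
      - main
    paths-ignore:          # skip runs for documentation-only changes
      - "docs/**"
      - "**.md"

jobs:
  track_benchmarks:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the commit under test
        uses: actions/checkout@v4
        with:
          # For a push event this is the pushed commit; in a pull-request context the
          # review recommends pinning to the PR head SHA
          # (github.event.pull_request.head.sha) so the exact triggering commit is benchmarked.
          ref: ${{ github.sha }}
```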
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- .github/workflows/track_benchmarks.yml (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- .github/workflows/track_benchmarks.yml
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (1)
- .github/workflows/track_benchmarks.yml (1 hunks)
Additional comments: 4
.github/workflows/track_benchmarks.yml (4)
- 3-7: The trigger configuration, which excludes documentation changes and focuses on the main branch, is well thought out for a benchmark tracking workflow.
- 10-20: The job configuration, including the selection of `ubuntu-latest` as the runner and the setup of environment variables for Bencher, is appropriately configured for benchmark tracking.
- 22-29: The steps for checking out code and converting cache data to JSON are correctly configured, showcasing a robust approach to preparing benchmark data for tracking.
- 31-32: The installation of Bencher CLI using its GitHub repository is a standard practice and ensures that the latest version is used for benchmark tracking (see the installation sketch after this list).
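For reference, installing the Bencher CLI inside a workflow is usually done through Bencher's own GitHub Action rather than by building from source. The steps below are a sketch based on Bencher's public documentation, not a copy of the PR's actual steps.

```yaml
      # Hypothetical install step: Bencher publishes an action that puts the CLI on PATH.
      - name: Install Bencher CLI
        uses: bencherdev/bencher@main

      # Quick sanity check that the CLI is available (illustrative only).
      - name: Check Bencher version
        run: bencher --version
```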
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (1)
- .github/workflows/track_benchmarks.yml (1 hunks)
Additional comments: 3
.github/workflows/track_benchmarks.yml (3)
- 3-7: The workflow is configured to trigger on pushes to the main branch, excluding documentation changes. This is a good practice as it ensures that benchmark tracking is performed only when relevant code changes are made, avoiding unnecessary runs for documentation updates.
- 12-14: Setting `permissions` for `pull-requests` and `contents` to `write` is necessary for this workflow, as it likely needs to update benchmark results or related content. However, always ensure that the principle of least privilege is applied to GitHub Actions to minimize security risks. If the workflow can function with more restrictive permissions, consider adjusting them accordingly.
- 15-19: The use of a specific runner (`benchmarking-runner`) and the definition of environment variables for Bencher configuration are well-structured. This setup ensures that benchmarks are run in a consistent environment, which is crucial for accurate performance tracking (a permissions and environment sketch follows this list).
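To make the permissions and environment discussion concrete, the job header might resemble the sketch below. The `benchmarking-runner` label, the `write` permissions, and `BASE_BENCHMARK_RESULTS` come from the review comments; the remaining variable names and values are assumptions.

```yaml
jobs:
  track_benchmarks:
    # Dedicated runner label mentioned in the review, for consistent benchmark hardware.
    runs-on: benchmarking-runner
    permissions:
      # Write access so Bencher can comment on pull requests and update tracked results.
      pull-requests: write
      contents: write
    env:
      # Only BASE_BENCHMARK_RESULTS is named in the review; the other entries are illustrative.
      BENCHER_PROJECT: my-project
      BENCHER_TESTBED: benchmarking-runner
      BASE_BENCHMARK_RESULTS: base_benchmark_results.json
```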
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (1)
- .github/workflows/track_benchmarks.yml (1 hunks)
Additional comments: 3
.github/workflows/track_benchmarks.yml (3)
- 3-7: The workflow is correctly configured to trigger on pushes to the main branch, excluding documentation changes. This ensures that benchmark tracking is performed only when relevant code changes occur, optimizing resource usage.
- 12-14: Setting the `pull-requests` and `contents` permissions to `write` is necessary for Bencher to update benchmark results. However, ensure that the principle of least privilege is followed and that these permissions are strictly required for the operations Bencher performs.
- 15-19: The use of a specific runner (`benchmarking-runner`) and the definition of environment variables for Bencher configuration are well thought out. This setup ensures that benchmarks are run in a consistent environment, which is crucial for accurate performance tracking (the `bencher run` sketch after this list shows how such variables are typically consumed).
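The step that actually reports results typically wraps them in a `bencher run` invocation. The flags below exist in Bencher's public CLI, but the project slug, secret name, and results file are illustrative assumptions rather than the PR's real values.

```yaml
      - name: Track benchmarks with Bencher
        run: |
          bencher run \
            --project "$BENCHER_PROJECT" \
            --token "${{ secrets.BENCHER_API_TOKEN }}" \
            --branch main \
            --testbed "$BENCHER_TESTBED" \
            --adapter json \
            --file "$BASE_BENCHMARK_RESULTS" \
            --err
```

With `--err`, the run fails when Bencher generates a regression alert, which turns the workflow into a gate rather than a passive tracker.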
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- .github/workflows/track_benchmarks.yml (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- .github/workflows/track_benchmarks.yml
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #1367      +/-   ##
==========================================
+ Coverage   88.47%   88.55%   +0.08%
==========================================
  Files         129      129
  Lines       13799    13751      -48
==========================================
- Hits        12208    12177      -31
+ Misses       1591     1574      -17

☔ View full report in Codecov by Sentry.
closing in favour of #1441
Completely get it. It was fun to learn about the implementation of bencher.dev. 😁 😁
Summary:
Added Bencher
Issue Reference(s):
Fixes #1300
Build & Testing:
- I ran `cargo test` successfully.
- I have run `./lint.sh --mode=fix` to fix all linting issues raised by `./lint.sh --mode=check`.

Checklist:
- The PR title follows the `<type>(<optional scope>): <title>` convention.
Summary by CodeRabbit