[Dev] Complete benchmark op sets of ci #100
Merged
/run-benchmark
This pull request includes several changes to improve the performance and reliability of the benchmarking process, as well as to simplify the codebase. The most significant changes are: updating the GitHub Actions workflow file .github/workflows/benchmark.yml to better handle commit IDs, modifying the benchmarking script benchmark/operators/benchmark_ops_matmul.py to add test cases and refactor repetitive code, and enhancing error handling in benchmark/operators/compare_benchmark.py.

GitHub Actions workflow improvements:
.github/workflows/benchmark.yml: Renamed the benchmark job to benchmark_base and added a new benchmark_compare job that compares benchmark results between different commits. Also changed how commit IDs are handled: they are now stored in text files and uploaded as artifacts instead of being passed through environment variables.

Benchmarking script enhancements:
benchmark/operators/benchmark_ops_matmul.py: Added new test cases and refactored the prepare_benchmark_sets method to reduce code repetition. Also changed the legalize_shape method to use the key "m" instead of "M" in the dyn_prof_shape dictionary.

Error handling improvements:
benchmark/operators/compare_benchmark.py: Added error handling for cases where an operator is not found in the benchmark sets, and a print statement that shows the base and head commits being compared.

Code simplification:
bitblas/base/utils.py: Removed the profile_tensors attribute from the CompileResult class and changed the profile method to calculate latency directly. Also updated the apply_and_build_parallel function to no longer use profile_tensors.

Cleanup operations:
bitblas/ops/general_matmul/__init__.py: Added a cleanup method to free workspace memory after use.
bitblas/ops/operator.py: Removed the profile_tensors attribute from the OperatorBase class and updated the get_profile_tensors method to release memory after use. Also added a cleanup method to be implemented by subclasses.
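The prepare_benchmark_sets refactor described above is not shown in this summary. As a rough illustration of the idea, near-identical per-shape/per-dtype blocks can be collapsed into a single generator loop. This is a hypothetical sketch: the shapes, dtypes, and config format below are invented, not the actual BitBLAS code.

```python
from itertools import product


def prepare_benchmark_sets():
    """Build a {name: config} table for matmul benchmarks.

    Generating every (shape, dtype) combination from one loop avoids
    copy-pasting a near-identical block per combination.
    """
    shapes = [(1, 16384, 16384), (32768, 8192, 8192)]  # placeholder shapes
    dtypes = ["float16", "int8"]                        # placeholder dtypes
    sets = {}
    for (m, n, k), dtype in product(shapes, dtypes):
        name = f"matmul_m{m}_n{n}_k{k}_{dtype}"
        sets[name] = {"M": m, "N": n, "K": k, "dtype": dtype}
    return sets
```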
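The error-handling change in compare_benchmark.py (skipping operators missing from one side instead of crashing) can be illustrated with a minimal sketch. The function name, result format, and messages here are assumptions, not the real script.

```python
def compare_benchmark_sets(base_results, head_results):
    """Compare two {operator: latency_ms} dicts.

    Operators missing from the head results are reported and skipped
    rather than raising a KeyError, so one absent op does not abort
    the whole comparison.
    """
    report = {}
    for op, base_latency in base_results.items():
        head_latency = head_results.get(op)
        if head_latency is None:
            print(f"warning: operator {op!r} not found in head benchmark sets, skipping")
            continue
        # ratio < 1.0 means the head commit is faster than the base commit
        report[op] = head_latency / base_latency
    return report
```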
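Calculating latency directly inside profile, instead of keeping a profile_tensors attribute alive on the object, might look roughly like the following. The class name, the timing loop, and the list-based stand-ins for tensor allocations are illustrative only.

```python
import gc
import time


class CompileResultSketch:
    """Stand-in for a compiled-kernel result; holds no profiling tensors."""

    def __init__(self, func):
        self.func = func  # the compiled kernel to time

    def profile(self, sizes, warmup=3, repeat=10):
        """Allocate inputs locally, time the kernel, free the inputs.

        Returns mean latency in milliseconds. Because the inputs are
        locals rather than an attribute, they are released as soon as
        profiling finishes.
        """
        tensors = [list(range(n)) for n in sizes]  # stand-in allocations
        for _ in range(warmup):
            self.func(*tensors)
        start = time.perf_counter()
        for _ in range(repeat):
            self.func(*tensors)
        latency_ms = (time.perf_counter() - start) / repeat * 1e3
        del tensors
        gc.collect()
        return latency_ms
```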
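The cleanup pattern described above, a base-class hook that concrete operators implement to free workspace memory, can be sketched as follows. The class names and the bytearray workspace are placeholders, not the BitBLAS classes.

```python
class OperatorBaseSketch:
    """Base operator: declares the cleanup hook for subclasses."""

    def cleanup(self):
        # Subclasses release whatever workspace they allocated.
        raise NotImplementedError


class MatmulSketch(OperatorBaseSketch):
    def __init__(self, workspace_bytes):
        # Placeholder for a device workspace buffer.
        self.workspace = bytearray(workspace_bytes)

    def cleanup(self):
        # Drop the reference so the buffer can be reclaimed.
        self.workspace = None
```

Calling cleanup after the operator is no longer needed frees the workspace without waiting for the object itself to be garbage collected.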