Handle benchmark configs when extracting benchmark results #7433
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7433
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 2 Unrelated Failures
As of commit c33f815 with merge base 82763a9:
NEW FAILURE - The following job has failed:
BROKEN TRUNK - The following jobs failed but were present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@huydhn has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
ExecuTorch has replaced the backend field with a more generic benchmark configs concept, so the dashboard will display that instead.
* pytorch/executorch#7349
* pytorch/executorch#7433

### Testing
https://torchci-git-fork-huydhn-add-executorch-backend-fbopensource.vercel.app/benchmark/llms?startTime=Tue%2C%2017%20Dec%202024%2019%3A05%3A31%20GMT&stopTime=Tue%2C%2024%20Dec%202024%2019%3A05%3A31%20GMT&granularity=hour&lBranch=handle-benchmark-config-dashboard&lCommit=c48da2bd2c9a32705db9b1adf638344474c275a4&rBranch=handle-benchmark-config-dashboard&rCommit=c33f815eff17c8890f2c8527dc0f0dbca50b4397&repoName=pytorch%2Fexecutorch&modelName=All%20Models&backendName=All%20Backends&dtypeName=All%20DType&deviceName=All%20Devices
Once this lands, we can wait a week for the old data to go out of focus on http://localhost:3000/benchmark/llms?repoName=pytorch%2Fexecutorch.
@huydhn It looks like test-spec specialization started failing last Friday after this PR merged. For example: https://github.com/pytorch/executorch/actions/runs/12540463242/job/34967784456.
Oh, silly me. This failed for
The benchmark config JSON files are uploaded as artifacts and can be used later to populate the benchmark results with the model name and configs before they are uploaded to the database.
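For illustration, here is a minimal sketch of how extracted results could be populated from a benchmark config artifact. This is not the actual extraction script: the file names (`benchmark-config.json`, `benchmark-results.json`), the record keys, and the helper functions are all hypothetical.

```python
# Hypothetical sketch: merge a benchmark config artifact into extracted
# benchmark records before upload. File names and keys are illustrative.
import json
from pathlib import Path
from typing import Any, Dict, List


def load_benchmark_config(artifact_dir: Path) -> Dict[str, Any]:
    """Load the benchmark config JSON uploaded earlier as a workflow artifact."""
    with (artifact_dir / "benchmark-config.json").open() as f:  # hypothetical file name
        return json.load(f)


def populate_results(
    results: List[Dict[str, Any]], config: Dict[str, Any]
) -> List[Dict[str, Any]]:
    """Attach the model name and the full config to each extracted benchmark record."""
    for record in results:
        record["model_name"] = config.get("model", "unknown")
        # Carry the whole config instead of the old standalone backend field
        record["benchmark_config"] = config
    return results


if __name__ == "__main__":
    artifact_dir = Path("artifacts")  # hypothetical artifact download location
    config = load_benchmark_config(artifact_dir)
    with (artifact_dir / "benchmark-results.json").open() as f:  # hypothetical file name
        results = json.load(f)
    print(json.dumps(populate_results(results, config), indent=2))
```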
Testing
https://torchci-git-fork-huydhn-add-executorch-backend-fbopensource.vercel.app/benchmark/llms?startTime=Tue%2C%2017%20Dec%202024%2019%3A05%3A31%20GMT&stopTime=Tue%2C%2024%20Dec%202024%2019%3A05%3A31%20GMT&granularity=hour&lBranch=handle-benchmark-config-dashboard&lCommit=c48da2bd2c9a32705db9b1adf638344474c275a4&rBranch=handle-benchmark-config-dashboard&rCommit=c33f815eff17c8890f2c8527dc0f0dbca50b4397&repoName=pytorch%2Fexecutorch&modelName=All%20Models&backendName=All%20Backends&dtypeName=All%20DType&deviceName=All%20Devices