adding lines suggested by @EtoDemerzel0427 to fix "good" count #13
Open
dkorchevgithub wants to merge 518 commits into dkorchevgithub:master from mlcommons:master
Conversation
Co-authored-by: Miro <[email protected]>
…'compliance/check.py' (#1587) Co-authored-by: mlcommons-bot <null>
* Ignore trailing whitespace lines in spl.txt files.
* Remove fix from sync'ed power_checker.py.
* Reformat according to black.
…#1591)
* Add support to dump 10 compliance images during accuracy run for SDXL
* Fix typo
* Dump caption.txt in the same path
…ling_target is enabled (#1599)
* Fix loadgen token metrics latency constraints
* Update perf constraints check for token metrics
* Add equal issue mode for LLM models
* Add sample length check to TEST06
* Remove spaces in token metrics recommendation
* Add important item to Llama readme
* Fix bug: number of tokens logged before computing them
* Fix typo: lenght -> length
* Enable equal issue mode for LLM benchmarks
* Reduce min_query_count to 1 for Server/MultiStream/SingleStream
* Remove scenario
* Remove min_query_count so the default is used; revert padding change for equal issue Offline
* Pad min_queries, not samples_per_query, for non-Offline scenarios (sketched below)
* Add documentation for equal issue sample mode
Co-authored-by: Miro <[email protected]>
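The padding change above is easier to see as code. A minimal sketch of the idea under assumed names (this is not the actual loadgen API):

```python
import math

# Hypothetical helper illustrating equal-issue padding for non-Offline
# scenarios: round min_queries up to a whole multiple of the performance
# sample count so every sample is issued the same number of times.
def pad_min_queries(min_queries: int, performance_sample_count: int) -> int:
    return math.ceil(min_queries / performance_sample_count) * performance_sample_count

# Example: 270 queries over a 64-sample set pads to 320 (5 issues per sample).
assert pad_min_queries(270, 64) == 320
```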
* Update README.md: the custom fork is no longer needed, as the relevant changes are in the inference repository
* Update dataset.py
---------
Co-authored-by: Miro <[email protected]>
Co-authored-by: Miro <[email protected]>
…and dlrmv2 models (#1604)
* Update README.md: add CM commands to download Stable Diffusion models
* Update README.md
* Update README.md
* Turn equal issue mode off for Llama2 TEST06
* Add TEST06 to the output dir
* Fix submission checker and TEST06 for Llama2
* Remove redundant line
* Move test_dir check
…UNet) (#1624)
Currently 3D-UNet is the only workload that uses equal-issue mode in the Offline scenario. A recent code change for LLM equal-issue mode caused the 3D-UNet accuracy run to issue more than one query, bloating the accuracy log and failing the accuracy checking script. This change fixes that problem.
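A hedged sketch of the guard this fix implies, with illustrative names rather than the actual loadgen code: equal-issue padding should apply only to performance runs, so an accuracy run still issues each sample exactly once.

```python
# Illustrative only; loadgen's real logic lives in its C++ core.
def should_pad_queries(equal_issue_enabled: bool, accuracy_mode: bool) -> bool:
    """Apply equal-issue padding only outside accuracy mode, keeping
    mlperf_log_accuracy.json to one entry per sample."""
    return equal_issue_enabled and not accuracy_mode
```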
* Hotfix: DLRMv2 Audit TEST01 fallback failure. DLRMv2 Audit TEST01 may take the fallback route, which the accuracy check script (accuracy-dlrm.py) did not expect: it always assumed the entire sample set was present in the accuracy log, while Audit TEST01 generates only a subset. This fixes the Audit TEST01 failure described above.
* Typo fix
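The fix amounts to validating only the samples that actually appear in the accuracy log rather than demanding the full set. A rough sketch of that idea, assuming the usual mlperf_log_accuracy.json layout (a JSON array of entries with a "qsl_idx" field); this is not the actual accuracy-dlrm.py code:

```python
import json

def logged_sample_indices(log_path):
    # mlperf_log_accuracy.json is a JSON array of per-sample entries.
    with open(log_path) as f:
        entries = json.load(f)
    return {e["qsl_idx"] for e in entries}

def check_coverage(log_path, total_samples):
    seen = logged_sample_indices(log_path)
    # Old behavior: require len(seen) == total_samples, which fails when
    # TEST01 falls back and logs only a subset of the samples.
    # New behavior: accept any non-empty subset with valid indices.
    return bool(seen) and all(0 <= i < total_samples for i in seen)
```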
…g low accuracy results (#1627)
* Updated final report generation script to output the list of ids being used
* Update generate_final_report.py
* Fix SDXL, Retinanet and GPTJ accuracy checker
#2097)
* Update submission_checker.py | Prevent empty accuracy in open division
* Update test-submission-checker.yml
* Fix accuracy RE for pointpainting
* Support v4.1 accuracy RE for SDXL
Co-authored-by: Arjun Suresh <[email protected]>
* Add docs for llama3 + inference
* Update llama2-70b README.md
* Update main.py
Co-authored-by: Miro <[email protected]>
* Update mlcflow commands
#2120)
* Update verify_performance.py | Fix compliance test for extra percentile digit
* Update verify_performance.py | Fix TEST04 for extra percentile digit
* [Automated Commit] Format Codebase
* Update build_wheels.yml
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
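For context, the "extra percentile digit" refers to summary lines where the target percentile is printed with more precision (e.g. 99.9 instead of 99). A hedged sketch of a tolerant parser, assuming a summary-line format like the one shown; this is not the actual verify_performance.py code:

```python
import re

# Accept "99", "99.9", or "99.90" before "percentile latency (ns)".
LINE_RE = re.compile(r"^\s*(\d+(?:\.\d+)?) percentile latency \(ns\)\s*:\s*(\d+)")

def parse_percentile_line(line):
    m = LINE_RE.match(line)
    return (float(m.group(1)), int(m.group(2))) if m else None

# Example with the extra digit that previously tripped the check:
assert parse_percentile_line("99.90 percentile latency (ns) : 12345") == (99.9, 12345)
```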
* 🔄 synced local 'tools/submission/power/power_checker.py' with remote 'compliance/check.py'
* 🔄 synced local 'tools/submission/power/sources_checksums.json' with remote 'compliance/sources_checksums.json'
---------
Co-authored-by: mlcommons-bot <null>
) Co-authored-by: Miro <[email protected]>
* Log number of errors in detail log
* [Automated Commit] Format Codebase
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* [Automated Commit] Format Codebase
* Updated tags for submission checker command in docs
* Update mobilenets docs
* Update main.py
* Update dataset download commands - waymo calib (#2130)
* Merge from Master (#2155)
* Update submission_checker.py | Fix open model unit in Results (#2144)
* Add Llama 3.1 to special unit dict (#2150)
---------
Co-authored-by: Pablo Gonzalez <[email protected]>
* [Automated Commit] Format Codebase
* Inference docs - Update model and dataset download commands (#2153)
* Update llama2 70b model download docs
* Changes in model and dataset download commands
* Add powershell command to get result folder structure (#2156)
---------
Co-authored-by: ANANDHU S <[email protected]>
Co-authored-by: Pablo Gonzalez <[email protected]>
…#2166)
* Check that all systems and measurements folders have results
* Add a flag to skip the check that all systems contain results
* Update test-submission-checker.yml
---------
Co-authored-by: Arjun Suresh <[email protected]>
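A rough sketch of what the systems/measurements check could look like; the directory names follow the common MLPerf submission tree but are assumptions here, not the checker's actual logic:

```python
import os

def systems_missing_results(submitter_dir):
    """List systems that have a measurements/ folder but no results/ folder
    (layout assumed: <submitter>/measurements/<system>, <submitter>/results/<system>)."""
    measurements = os.path.join(submitter_dir, "measurements")
    results = os.path.join(submitter_dir, "results")
    have_results = set(os.listdir(results)) if os.path.isdir(results) else set()
    return [s for s in os.listdir(measurements) if s not in have_results]
```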
* Add calibration check to submission checker
* Update test-submission-checker.yml
---------
Co-authored-by: Arjun Suresh <[email protected]>
…he dataset (#2170) Co-authored-by: Miro <[email protected]>
…ood" count (mlcommons#965)
patch for the latest dlrm
updated Docker CPU as suggested in issues mlcommons#917 and mlcommons#604
updated for issues "DLRM inference README out of date" (mlcommons/inference#917) and "DLRM: save downloaded and generated files outside of Docker containers" (mlcommons/inference#604)
Update README.md
updating readme
fixed typo noticed by @psyhtest
adding lines suggested by @EtoDemerzel0427 to fix "good" count
Co-authored-by: Anton Lokhmotov <[email protected]>
Co-authored-by: rameshchukka <[email protected]>