Add support for operations needed for well-known transformers e.g. Segment Anything, Stable Diffusion, etc. #478
Conversation
Thanks for the great work!
Thanks @wchao1115 !
Here's a nit, PTAL.
Looks like the build isn't clean any more, these issues should be fixed:
Here's the current status of the baseline implementation and the WPT tests for the newly added ops, PTAL, thanks.
This CL implements the MLOperand.dataType() method according to proposal [1]. The method supports querying the operand data type at runtime. It is named dataType() to align with MLOperandDescriptor.dataType [2].

[1]: webmachinelearning/webnn#478
[2]: https://www.w3.org/TR/webnn/#dom-mloperanddescriptor-datatype

Bug: 1273291
Change-Id: I18c19d089e310b0c24e73dbb8c028fa351782082
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/5056627
Reviewed-by: Jiewei Qian <[email protected]>
Commit-Queue: ningxin hu <[email protected]>
Cr-Commit-Position: refs/heads/main@{#1228454}
@inexorabletash Fixed.
+1. I think we should acknowledge Dwayne for his outstanding contribution. @wchao1115 @anssiko |
Fixed. 😊
One functional comment added, plus some typo fixes and wording proposals. Otherwise, given all other matters are deferrable as separate issues, this is looking good to me. Thanks Chai.
Thanks to Dwayne Robinson for his work...
Arigatou (thank you) 😊.
Minor things, else LGTM. 😎
All the discussions in this PR have been resolved, some comments were spun into separate non-blocking issues as agreed with the reviewers, and the build is clean. Great work everyone involved! @wchao1115 @huningxin please take a final look.

Given this is a big PR with many micro-commits, I'd like to hear whether you'd prefer to squash it. Below is my reasoning and recommendation. For the earlier big PR #446 we decided to keep the commit history in the main branch. For this PR, a squash merge might be preferred instead since there are many micro-commits that fix small issues and typos. GitHub will retain the commit history of this PR if we keep it around after the squash merge, so references to these commits will continue to work. The only tradeoff I'm aware of is that local git clones of this repo won't have access to the history of this PR by default.

If we agree to squash this PR, @wchao1115 please update your extensive PR summary in #478 (comment) to reflect the latest so we can reuse it as the squash commit message. This will help future contributors who look at the history. Looking forward to merging this soon 🚀
LGTM, thanks much!
Issue #375.

- Added ops: `equal`, `lesser`, `greater`, `greaterOrEqual`, `lesserOrEqual`, `not`, `where`, `copy`, `sqrt`, `erf`, `expand`, `gather`, `layerNormalization`, `reduceArgMin`, `reduceArgMax`, `cast`, `triangular`
- Removed `squeeze`
- Added a `constant` overload that accepts `start`, `end`, `step`
- Added a `.shape` method in `MLOperand` for runtime tensor shape query
- Added `int64` and `uint64` for `reduceArgMin` and `reduceArgMax`

Notes:

- The `identity` op is named `copy` to denote explicit action.
- `elementwiseIf` is named `where` to be consistent with PyTorch and ONNX.
- `triangularMatrix` is named `triangular` for brevity.
- `squeeze` is removed. Frameworks seeking `squeeze`, `unsqueeze`, or `flatten` should just use `reshape`. See the canonical implementations of these ops as `reshape` in the `reshape` section.
- A `layerNormalization` op is added. There are enough subtleties in the way affine parameters (e.g. scale, bias) are expected across batchNorm, instanceNorm, and layerNorm that they could become ambiguous under a unified op. Additionally, downstream implementations of these normalizations already exist on various platforms; unification at the WebNN level could potentially confuse framework developers and affect currently defined platform-specific optimizations.
- A `fillSequence` op is added as a `constant` overload due to its nature as a resource allocation activity.

Big thanks to @fdwr for the thoughtful and thorough proposal.
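For readers skimming the summary above, here is a rough NumPy sketch of the semantics of a few of the new ops (`where`, `layerNormalization`, and the `constant` fill-sequence overload). This is an illustrative approximation, not the WebNN API itself; the Python function names, default axes, and the epsilon value are my own assumptions.

```python
import numpy as np

# Hypothetical NumPy stand-ins for some of the newly added WebNN ops.
# Names and defaults are illustrative, not the actual API surface.

def where(condition, true_value, false_value):
    # `elementwiseIf` was renamed `where` for consistency with PyTorch/ONNX:
    # pick from true_value where condition holds, else from false_value.
    return np.where(condition, true_value, false_value)

def layer_normalization(x, scale=1.0, bias=0.0, axes=(-1,), epsilon=1e-5):
    # Normalize over the given axes, then apply the affine parameters.
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mean) / np.sqrt(var + epsilon) * scale + bias

def fill_sequence(start, end, step, dtype=np.float32):
    # The `constant(start, end, step)` overload: a 1-D evenly spaced tensor.
    return np.arange(start, end, step, dtype=dtype)

cond = np.array([True, False, True])
print(where(cond, np.array([1, 2, 3]), np.array([9, 9, 9])))  # [1 9 3]
print(fill_sequence(0, 5, 2, dtype=np.int64))                 # [0 2 4]
```

Note how `layerNormalization` keeps scale and bias as explicit per-op parameters, which is the subtlety the summary cites as the reason against unifying the normalization ops.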