In general, we can't determine the full set of tests and subtests from human-written metadata alone, since that metadata may (very reasonably) omit PASSing tests/subtests. We'll need some way to actually guarantee that we enumerate the "full" set of tests. Ways I can think of:
1. Document that we rely on PASSing tests being present in metadata for correct PASS and total counts in `moz-webgpu-cts triage`.
2. Enumerate the full set of tests from the contents of `tests/webgpu`, load them into set data structures, and remove entries as other outcomes are discovered for those tests. This would be an "innocent until proven guilty" approach, as it were. It seems fragile, though, and it raises awkward questions about what to do when the layout of tests doesn't match the layout of metadata. I suppose "just" bailing out of an analysis when a mismatch is discovered is acceptable here, though a terrible user experience. I'd strongly prefer another approach.
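The set-subtraction idea in (2) could be sketched roughly as below. This is a hypothetical illustration, not the tool's actual code; the function name and the string-keyed sets stand in for whatever test-path representation `moz-webgpu-cts` actually uses.

```rust
use std::collections::BTreeSet;

/// Hypothetical sketch of the "innocent until proven guilty" approach:
/// start from the full set of test paths enumerated from `tests/webgpu`,
/// then subtract every test for which metadata records a non-PASS outcome.
/// Returns `(passing, total)` counts for a summary line.
fn count_passing(
    all_tests: &BTreeSet<String>,   // enumerated from the test tree on disk
    non_passing: &BTreeSet<String>, // tests with any non-PASS expectation in metadata
) -> (usize, usize) {
    // Anything never "proven guilty" by metadata is presumed passing.
    let passing = all_tests.difference(non_passing).count();
    (passing, all_tests.len())
}

fn main() {
    let all: BTreeSet<String> = ["a", "b", "c", "d"].iter().map(|s| s.to_string()).collect();
    let failing: BTreeSet<String> = ["b", "d"].iter().map(|s| s.to_string()).collect();
    let (pass, total) = count_passing(&all, &failing);
    println!("{pass}/{total} tests passing"); // prints "2/4 tests passing"
}
```

Note the fragility mentioned above shows up here too: an entry in `non_passing` that is absent from `all_tests` would indicate a layout mismatch between tests and metadata, which this sketch silently ignores rather than bailing out.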
As for the subtest level, the only way to enumerate subtests is via (1), since JS execution (i.e., execution reports) is the only way to discover them.
Since other `moz-webgpu-cts` subcommands should already uphold the conditions of (1), I think (1) is the way forward.
i.e., print a summary like `1234/5678 tests passing, 1357/9135 subtests passing` before `PRIORITY` sections are printed.