Agenda for Feb 3 meeting #54
Under "Update on metrics" I'd like to talk about how the math was handled in 2021 scoring and how we can perhaps do better for 2022: |
@jensimmons I've added #46 to the top of the agenda, since I think that explains a lot of what's going on with Compat 2021 scoring. If there's more to discuss we can do that of course.
Here are the notes I took:

Clarify / clean-up Interop 2021 labeling
- Jen: Conversion of test results to score. Everything gets rounded down in the step of converting to an integer 0-20 (see the sketch below).

Score "Investigate" progress as part of the overall metric
- Not revisiting the positions spreadsheet, it hasn't changed.
- Jen: Let's say we make "investigate" a 16th bucket. If scored 0-100%, will that be the same for all browsers? Or does it somewhat depend on which browser representatives have done more of the work?
- Philip: If we do agree to investigate and score, are we happy with the 4 in #49?
- Philip: If we score the investigation efforts, is there any constraint on what 100% means?

Dashboard update
- Feb 14 launch date
- Jen: That's a week and a few days away.
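A minimal sketch of the rounding issue discussed above, assuming each focus area's pass rate is converted to an integer 0-20 score by truncating; the function name and exact conversion are illustrative, not the dashboard's actual code:

```ts
// Illustrative only: assumes the 2021-style conversion maps a pass
// fraction in [0, 1] to an integer score of 0-20 by rounding down.
function focusAreaScore(passFraction: number): number {
  // Math.floor discards the fractional part, so e.g. a 99.9% pass
  // rate (0.999 * 20 = 19.98) still scores 19 rather than 20.
  return Math.floor(passFraction * 20);
}

console.log(focusAreaScore(0.999)); // 19
```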
I have updated web-platform-tests/rfcs#99 with the proposed 90/10% split of the metric as discussed in the meeting.
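For reference, a minimal sketch of how a 90/10 split could combine the two components, assuming 90% of the overall metric comes from test results and 10% from investigation progress; the RFC has the authoritative wording, and the names and inputs here are hypothetical:

```ts
// Hypothetical weighting under a 90/10 split; testScore and
// investigationScore are assumed to be fractions in [0, 1].
function overallMetric(testScore: number, investigationScore: number): number {
  return 0.9 * testScore + 0.1 * investigationScore;
}

console.log(overallMetric(0.8, 0.5)); // 0.77
```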
Here's the agenda for our meeting tomorrow:
- Keeping interop-2022-viewport and the viewport measurement investigation (Viewport Investigation project #41) separate for scoring
- Previous meeting: #50