The markup in question is in Panoptes-Front-End/app/classifier/tutorial.jsx, lines 293 to 296 (commit 7f38b4c).
Tutorial steps are marked up as ARIA live regions here. This was a hack to make sure that tutorial content is announced for blind volunteers, but it has the side effect of making the tutorial less accessible. Live regions are designed to announce single, short strings in screen readers. Structural elements (headings, paragraphs, images, etc.) and interactive elements (buttons, inputs, and links) are ignored, and only their text content is announced. So each step, including its links and the tutorial navigation buttons, is announced as a single, long string of text. There's no indication of structure, or of how to navigate between steps. Embedded content, such as audio examples, is simply ignored.
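For illustration only, here's a minimal sketch of the kind of live-region step described above; the component and prop names are assumptions, not the actual tutorial.jsx code:

```jsx
// Hypothetical sketch, not the real tutorial.jsx markup.
// Because the step container is a live region, screen readers flatten it:
// the heading, paragraph, link and buttons are read out as one long string,
// with no roles to navigate by, and the <audio> element is skipped entirely.
function TutorialStep({ step, onNext }) {
  return (
    <div aria-live="polite">
      <h2>{step.title}</h2>
      <p>{step.content}</p>
      <audio controls src={step.audioSrc} />
      <a href={step.helpURL}>Read more</a>
      <button type="button" onClick={onNext}>Next</button>
    </div>
  );
}
```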
The tutorial should really be marked up as a focusable dialog, with accessible links between steps (or maybe a tabbed carousel); see the sketch below. Blind users (and keyboard users in general) should have focus shifted to the tutorial when it opens, and back to where they were in the page when it closes again. One blind volunteer did note that the tutorial and field guide both open at the bottom of the page, after the footer, and they had to experiment by trial and error to realise that they had to navigate backwards through the page in order to return to the classification task.
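As a rough sketch of that dialog pattern (assuming modern React with hooks, which may not match the component style actually used in Panoptes-Front-End):

```jsx
import React, { useEffect, useRef } from 'react';

// Hypothetical sketch of the suggested pattern, not the existing implementation.
function TutorialDialog({ isOpen, onClose, children }) {
  const dialogRef = useRef(null);
  const previousFocus = useRef(null);

  useEffect(() => {
    if (isOpen) {
      // Remember where the volunteer was, then move focus into the dialog.
      previousFocus.current = document.activeElement;
      if (dialogRef.current) dialogRef.current.focus();
    } else if (previousFocus.current) {
      // On close, send focus back to the classification task.
      previousFocus.current.focus();
      previousFocus.current = null;
    }
  }, [isOpen]);

  if (!isOpen) return null;

  return (
    <div
      ref={dialogRef}
      role="dialog"
      aria-modal="true"
      aria-label="Project tutorial"
      tabIndex={-1}
    >
      {children}
      <button type="button" onClick={onClose}>Close</button>
    </div>
  );
}
```

With role="dialog" rather than a live region, each step keeps its headings, links, buttons and embedded media, so screen reader users can navigate within a step instead of hearing it as one long string.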
The original feedback for this, from a totally blind volunteer who uses NVDA, is in #5531.
See Scott O'Hara's "Are We Live?" for a more in-depth explanation of ARIA live regions.