TTS QASP

Isn't this a pretty table?!

| Deliverable | Performance Standards | Acceptable Quality Level | Method of Assessment | What does it mean for our team? |
| --- | --- | --- | --- | --- |
| Tested Code | Code delivered under the order must have substantial test code coverage and a clean code base. Version-controlled Court GitHub repository of code that comprises the product that will remain in the government domain. | Minimum of 90% test coverage of all code | Combination of manual review and automated testing | If requested, we are able to prove a minimum of 90% test coverage. The team is responsible for manual testing, and it is part of our Definition of Done. (See the coverage sketch after this table.) |
| Properly Styled Code | GSA 18F Front End Guide | 0 linting errors and 0 warnings | Combination of manual review and automated testing | Code follows 18F guidelines for properly styled code. |
| Accessible | Web Content Accessibility Guidelines 2.1 AA (WCAG 2.1 AA) standards | 0 errors reported for WCAG 2.1 AA standards using an automated scanner and 0 errors reported in manual testing | CodeSniffer or pa11y | All web content is accessible. We perform a11y testing through the use of tools, which typically catch 30% of a11y issues, & manual testing, which covers the other 70%. We consider a11y in the design choices we make. (See the pa11y sketch after this table.) |
| Deployed | Code must successfully build and deploy into staging environment | Successful build with a single command | Combination of manual review and automated testing | Code successfully builds with a single command. A staging environment is required & utilized to verify changes in a non-prod environment. |
| Secure | OWASP Application Security Verification Standard 3.0 | Code submitted must be free of medium- and high-level static and dynamic security vulnerabilities | Clean tests from a static testing SaaS (such as Gemnasium) and from OWASP ZAP, along with documentation explaining any false positives | Dependabot is turned on for our repo. We utilize a dynamic scanner. This is part of our Definition of Done. |
| User Research | Usability testing and other user research methods must be conducted at regular intervals throughout the development process (not just at the beginning or end). | Research plans and artifacts from usability testing and/or other research methods with end-users are available at the end of every applicable sprint, in accordance with the vendor's research plan. | TTS will evaluate the artifacts based on a research plan provided by the vendor at the end of the second sprint and every applicable sprint thereafter. | For each project, we create research plans before performing user research in order to plan out what we are trying to learn from the interaction with a user. Research plans exist for interviews & surveys. |
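
The 90% coverage floor in the Tested Code row can be enforced automatically in CI rather than checked by hand. Here is a minimal sketch assuming a Jest-based JavaScript/TypeScript test suite; the QASP itself does not prescribe a test framework, so the file name and thresholds are illustrative only.

```ts
// jest.config.ts — illustrative only; assumes Jest is the project's test runner
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageReporters: ['text', 'lcov'],
  coverageThreshold: {
    // Jest fails the test run when global coverage drops below these
    // percentages, which lets CI enforce the QASP's 90% minimum automatically.
    global: {
      statements: 90,
      branches: 90,
      functions: 90,
      lines: 90,
    },
  },
};

export default config;
```

With this in place, running `npx jest --coverage` in the pipeline fails the build whenever any threshold is missed, so the coverage claim can be demonstrated on request.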
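
The Accessible row names pa11y as one automated scanner. Below is a hypothetical sketch of wiring a pa11y scan into a build to back the 0-error acceptable quality level; the staging URL is a placeholder, and the team may instead run pa11y from its CLI or via pa11y-ci.

```ts
// a11y-check.ts — hypothetical script; assumes the pa11y package is installed
import pa11y from 'pa11y';

async function main(): Promise<void> {
  // Scan a page against the WCAG 2 AA ruleset; the URL below is a placeholder.
  const results = await pa11y('https://staging.example.gov/', {
    standard: 'WCAG2AA',
  });

  if (results.issues.length > 0) {
    for (const issue of results.issues) {
      console.error(`${issue.code}: ${issue.message} (${issue.selector})`);
    }
    // A non-zero exit code fails the pipeline, keeping the 0-error AQL enforceable.
    process.exit(1);
  }

  console.log('pa11y reported no issues for the scanned page.');
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Exiting non-zero makes the check blocking in CI rather than advisory; manual testing still covers the issues an automated scanner cannot catch, as noted in the table.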

TTS QASP for Software Development - Design and Research PDF

A new QASP is in the works: New Draft QASP