TTS QASP
The government uses the QASP to monitor the quality of a vendor's deliverables and performance. This helps the government measure vendor performance and ensure it reaches the levels required by the contract. The QASP gives the government a proactive way to identify and avoid deficient performance. The following chart sets forth the performance standards and quality levels the code and documentation provided by the Contractor must meet, and the methods [the agency] will use to assess whether that code and documentation meet those standards and quality levels.
Deliverable | Performance Standards | Acceptable Quality Level | Method of Assessment | What does it mean for our team? |
---|---|---|---|---|
Tested Code | Code delivered under the order must have substantial test code coverage and a clean code base. Version-controlled Court GitHub repository of code that comprises the product and that will remain in the government domain | Minimum of 90% test coverage of all code | Combination of manual review and automated testing | If requested, we can demonstrate a minimum of 90% test coverage (see the coverage sketch after this table). The team is responsible for manual testing, and it is part of our Definition of Done. |
Properly Styled Code | GSA 18F Front End Guide | 0 linting errors and 0 warnings | Combination of manual review and automated testing | Code follows the 18F guidelines for properly styled code (see the lint sketch after this table). |
Accessible | Web Content Accessibility Guidelines 2.1 AA (WCAG 2.1 AA) standards | 0 errors reported for WCAG 2.1 AA standards using an automated scanner and 0 errors reported in manual testing | CodeSniffer or pa11y in CI/CD | All web content is accessible. We perform a11y testing with automated tools, which typically catch about 30% of a11y issues, and manual testing, which covers the other 70% (see the pa11y sketch after this table). We consider a11y in our design choices. Additional tools include WAVE and Contrast Checker for ad hoc checks, and NVDA for manual screen-reader testing. |
Deployed | Code must successfully build and deploy into the staging environment | Successful build with a single command | Combination of manual review and automated testing | Code successfully builds with a single command. A staging environment is required and used to verify changes in a non-production environment. |
Secure | OWASP Application Security Verification Standard 3.0 | Code submitted must be free of medium- and high-level static and dynamic security vulnerabilities | Clean tests from a static testing SaaS (such as Gemnasium) and from OWASP ZAP, along with documentation explaining any false positives | Dependabot is turned on for our repo, and we use a dynamic scanner. This is part of our Definition of Done. |
User Research | Usability testing and other user research methods must be conducted at regular intervals throughout the development process (not just at the beginning or end). | Research plans and artifacts from usability testing and/or other research methods with end users are available at the end of every applicable sprint, in accordance with the vendor's research plan. | TTS will evaluate the artifacts based on a research plan provided by the vendor at the end of the second sprint and every applicable sprint thereafter. | For each project, we create a research plan before performing user research so we know what we want to learn from each interaction with a user. Research plans exist for interviews and surveys. |
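The coverage bar in the Tested Code row can be enforced automatically so the build fails below 90%. Here is a minimal sketch, assuming a TypeScript project tested with Jest; the file name is illustrative, and applying the 90% figure to all four coverage metrics is an assumption (the QASP only states an overall minimum).

```ts
// jest.config.ts: hypothetical Jest configuration that fails the test run
// (and therefore the CI build) when coverage drops below the QASP's 90% minimum.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      // The QASP requires a minimum of 90% test coverage of all code;
      // enforcing the same bar on each metric is our assumption.
      statements: 90,
      branches: 90,
      functions: 90,
      lines: 90,
    },
  },
};

export default config;
```

Jest exits nonzero when any threshold is unmet, so no extra scripting is needed to fail the pipeline.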
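The 0-errors/0-warnings quality level for Properly Styled Code can likewise be checked headlessly. A sketch using ESLint's Node API, assuming the project's lint rules are already configured per the 18F Front End Guide; the file name and glob patterns are illustrative.

```ts
// lint-check.ts: hypothetical CI step that fails on any lint error or warning,
// matching the "0 linting errors and 0 warnings" quality level.
import { ESLint } from 'eslint';

async function main(): Promise<void> {
  const eslint = new ESLint();
  const results = await eslint.lintFiles(['src/**/*.{js,ts}']);

  // Sum error and warning counts across every linted file.
  const errors = results.reduce((sum, r) => sum + r.errorCount, 0);
  const warnings = results.reduce((sum, r) => sum + r.warningCount, 0);
  console.log(`${errors} error(s), ${warnings} warning(s)`);

  // The QASP's acceptable quality level is zero of both.
  process.exit(errors === 0 && warnings === 0 ? 0 : 1);
}

main();
```

Running `eslint --max-warnings 0` from the command line achieves the same effect if a script file is unwanted.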
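The automated half of the Accessible row can run in the same pipeline with pa11y. A sketch assuming a Node/TypeScript setup (with esModuleInterop) and pa11y installed; the STAGING_URL variable and fallback URL are assumptions, and pa11y's WCAG2AA standard corresponds to the WCAG 2 AA ruleset the QASP cites.

```ts
// a11y-check.ts: hypothetical CI step that scans one page with pa11y and fails
// the build on any reported issue, per the "0 errors reported ... using an
// automated scanner" quality level.
import pa11y from 'pa11y';

async function main(): Promise<void> {
  // STAGING_URL is an assumed environment variable pointing at the staging
  // deployment described in the Deployed row.
  const url = process.env.STAGING_URL ?? 'http://localhost:8080';

  const results = await pa11y(url, { standard: 'WCAG2AA' });

  for (const issue of results.issues) {
    console.error(`${issue.code}: ${issue.message}`);
  }
  // Nonzero exit fails the pipeline when any issue is reported.
  process.exit(results.issues.length === 0 ? 0 : 1);
}

main();
```

Automated scanning is only half the bar: the manual-testing half (NVDA, keyboard-only passes) still requires sign-off by a person.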
TTS QASP for Software Development - Design and Research PDF
A new QASP is in the works; see New Draft QASP.
Contact the team at [email protected] or reach out to Sarah Statz.