
US Digital Services Playbook Play 4

mattkwong-kpmg edited this page Mar 3, 2017 · 7 revisions

# Play 4: Build the Service Using Agile and Iterative Practices

We should use an incremental, fast-paced style of software development to reduce the risk of failure. We want to get working software into users’ hands as early as possible to give the design and development team opportunities to adjust based on user feedback about the service. A critical capability is being able to automatically test and deploy the service so that new features can be added often and be put into production easily.

## Checklist

1. Ship a functioning “minimum viable product” (MVP) that solves a core user need as soon as possible, no longer than three months from the beginning of the project, using a “beta” or “test” period if needed

For the prototype we had a deadline of less than one month to deliver a minimum viable product, so the deployed prototype is that MVP.

2. Run usability tests frequently to see how well the service works and identify improvements that should be made

We used periodic usability testing sessions to make refinements to the user stories and designs. The findings from these sessions were then assigned and implemented in our three sprints.

3. Ensure the individuals building the service communicate closely using techniques such as launch meetings, war rooms, daily standups, and team chat tools

We have a skilled team dispersed across three time zones, communicating and coordinating via GitHub, Slack, SharePoint, email, and phone. We hold daily scrum meetings and weekly sprint reviews (matching the planned sprint interval), and organize additional coordination meetings as needed.

4. Keep delivery teams small and focused; limit organizational layers that separate these teams from the business owners

Our core prototype team is the twelve people listed on this Roles and Responsibilities wiki page. We identified one product owner and one agile coach (scrum master) to reduce unnecessary overhead, per the Scrum framework.

5. Release features and improvements multiple times each month

For the prototype we had a deadline of less than one month to deliver a minimum viable product, but in that one month we planned three sprints to release features and improvements.

6. Create a prioritized list of features and bugs, also known as the “feature backlog” and “bug backlog”

We used the GitHub Issues board to track features and bugs, prioritized per the Product Owner's definition.

7. Use a source code version control system

We used Git, hosted on GitHub, as our source code version control system.
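As a sketch, the day-to-day branching flow looked roughly like the following; the repository, branch, file, and issue names here are illustrative, not the prototype's actual ones:

```shell
# Illustrative feature-branch workflow (names are placeholders).
git init demo-repo
cd demo-repo
git config user.name "Demo User"          # local identity for this example only
git config user.email "demo@example.com"

echo "prototype" > README.md
git add README.md
git commit -m "Initial commit"
git branch -M main                        # use main as the trunk branch

# Each feature gets its own branch, tied to a GitHub issue...
git checkout -b feature/usability-fixes
echo "larger touch targets" > fix.txt
git add fix.txt
git commit -m "Address usability finding (#12)"

# ...and is merged back to the trunk after code review.
git checkout main
git merge --no-ff feature/usability-fixes -m "Merge usability fixes"
git log --oneline
```

The `--no-ff` merge keeps an explicit merge commit in the history, which makes the review-then-merge boundary visible in the log.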

8. Give the entire project team access to the issue tracker and version control system

The entire prototype team has access to the GitHub repository and issues board.

9. Use code reviews to ensure quality

Code review by technical architect Robert Levy is part of our Definition of Done, and additional code reviewers were assigned as needed during the development process.
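On GitHub, one way to make a named reviewer part of every pull request is a CODEOWNERS file combined with branch protection; the sketch below uses a hypothetical username for the technical architect:

```
# .github/CODEOWNERS (username below is illustrative, not an actual handle)
# Requests review from the technical architect on every pull request;
# enforcing it also requires "require review from code owners" branch protection.
*   @tech-architect
```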

## Key Questions

  1. How long did it take to ship the MVP? If it hasn’t shipped yet, when will it?

For the prototype we had a deadline of less than one month to deliver a minimum viable product, and we built the best prototype we could within that time.

  1. How long does it take for a production deployment?

Not including the required Product Owner approval, an automated build, test, and deploy to Production takes about twenty minutes.
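The overall shape of such a pipeline can be sketched as a script that stops on the first failing step; the step bodies below are placeholders, not the prototype's real build, test, and deploy commands:

```shell
# Minimal sketch of an automated build-test-deploy chain (steps are placeholders).
set -e                                             # abort on the first failing step

build()     { echo "building application..."; }    # placeholder build step
run_tests() { echo "running test suite..."; }      # placeholder test step
deploy()    { echo "deploying to production..."; } # placeholder deploy step

start=$(date +%s)
build
run_tests
deploy
end=$(date +%s)
echo "pipeline finished in $((end - start)) seconds"
```

Because of `set -e`, a failing test stops the script before `deploy` runs, which is what keeps broken builds out of production.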

  1. How many days or weeks are in each iteration/sprint?

For the prototype we had a deadline of less than one month to deliver a minimum viable product, so our sprints were about a week each.

  1. Which version control system is being used?

We used Git, hosted on GitHub, as our source code version control system.

  1. How are bugs tracked and tickets issued? What tool is used?

We used the GitHub Issues board to track features and bugs, prioritized per the Product Owner's definition.

  1. How is the feature backlog managed? What tool is used?

We used the GitHub Issues board to manage our product backlog, prioritized per the Product Owner's definition.

  1. How often do you review and reprioritize the feature and bug backlog?

We formally reviewed and reprioritized at sprint reviews, but otherwise updated the backlog as soon as the Product Owner identified a reason to reprioritize (e.g., usability findings or technical updates).

  1. How do you collect user feedback during development? How is that feedback used to improve the service?

We logged user feedback in the GitHub wiki and as issues on the GitHub Issues board at the Product Owner's discretion.

  1. At each stage of usability testing, which gaps were identified in addressing user needs?

Gaps identified during usability testing were promptly added to the GitHub Issues board at the Product Owner's discretion.

# US Digital Services Playbook

  1. Play 1 Understand what people need
  2. Play 2 Address the whole experience, from start to finish
  3. Play 3 Make it simple and intuitive
  4. Play 4 Build the service using agile and iterative practices
  5. Play 5 Structure budgets and contracts to support delivery
  6. Play 6 Assign one leader and hold that person accountable
  7. Play 7 Bring in experienced teams
  8. Play 8 Choose a modern technology stack
9. Play 9 Deploy in a flexible hosting environment
  10. Play 10 Automate testing and deployments
  11. Play 11 Manage security and privacy through reusable processes
  12. Play 12 Use data to drive decisions
  13. Play 13 Default to open