Once you have collected data, you must transform it into evidence you can use to answer your assessment questions. Quantitative data can undergo quantitative analysis, and qualitative data can be analyzed either qualitatively or quantitatively.
Given the variety of methods and goals for assessing a connected learning initiative, not to mention the variation in connected learning programs in general, the exact process you follow to analyze your qualitative data will also vary. However, you will probably be conducting a form of content analysis, which occurs in four main phases:
- Prepare data for analysis. If you gathered data informally (via talkback boards, for instance, or short on-the-fly interviews), or if it is not textual (as with creative artifacts), it is probably somewhat unstructured. Give your data some structure to make systematic analysis easier. This might involve creating ID numbers for each item, typing up handwritten data, or dividing answers to different interview questions into columns on a spreadsheet.
- Create a coding schema. Codes are descriptive labels that are applied to specific parts of your data. You can probably put together an initial coding schema based on your evaluation questions. If multiple people are involved in analysis, make sure they agree on the codes and definitions so that you will have consistent results.
- Apply codes to the data. Be as broad or as specific as you need to ensure you end up with something useful and easy to understand. You may uncover new concepts and themes that require adjustments to your codebook—just make sure everyone agrees on and knows how to apply the modified codes. However, you cannot and should not try to code for everything. Unless you find something truly unexpected and impactful, stay focused on what will help you answer your questions.
- Draw evidence from the coded data. Even as you are coding data, you should be looking for themes and patterns. Once you’re done, take a step back and look at the big picture. What codes show up the most? Are there codes that usually appear together? Do certain codes appear more often for some participant groups than for others? How can you synthesize this data into evidence that answers your assessment question? (A brief sketch of this coding-and-tallying workflow follows this list.)
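The sketch below walks through these four phases in miniature, assuming interview responses have already been typed up. The responses, codes, and keyword matching are hypothetical examples only; in a real content analysis, a person (or team) applies the codes by reading each response, rather than by keyword search.

```python
# A minimal sketch of the four phases of content analysis described above.
# All data and codes below are hypothetical; keyword matching stands in for
# the human judgment that normally assigns codes.
from collections import Counter
from itertools import combinations

# Phase 1: prepared data -- each item has an ID and typed-up text.
responses = [
    {"id": 1, "text": "I want to study engineering after this program."},
    {"id": 2, "text": "My favorite part was working with my friends."},
    {"id": 3, "text": "Building the robot with my friends made me think about engineering jobs."},
]

# Phase 2: an initial coding schema based on the evaluation questions.
schema = {
    "STEM_CAREER": ["engineering", "scientist", "technology job"],
    "SOCIAL_CONNECTION": ["friends", "team", "together"],
}

# Phase 3: apply codes to each response.
coded = []
for r in responses:
    codes = {code for code, keywords in schema.items()
             if any(k in r["text"].lower() for k in keywords)}
    coded.append({"id": r["id"], "codes": codes})

# Phase 4: look for patterns -- which codes appear most, and which co-occur.
frequencies = Counter(c for item in coded for c in item["codes"])
co_occurrence = Counter(pair for item in coded
                        for pair in combinations(sorted(item["codes"]), 2))

print(frequencies)     # how often each code was applied
print(co_occurrence)   # pairs of codes applied to the same response
```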
It is often useful to apply quantitative (number-based) analysis to qualitative data. For instance, you can count the number of teens who mentioned interest in a STEM career during their interviews, or calculate what percentage of participants completed a challenge. Be careful not to misinterpret the numbers, though. A very talkative teen who gave a 10-minute interview might mention a STEM career more often than a quieter person who only talked for 3 minutes, but that doesn’t necessarily mean the talkative person is more interested in STEM careers.
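As a rough sketch of this kind of counting, the example below uses hypothetical data. Counting participants who mentioned something at least once, rather than raw mentions, is one way to avoid over-weighting talkative interviewees.

```python
# A minimal sketch of quantifying coded qualitative data. All data below is
# hypothetical; "mentions" is how many times each teen mentioned a STEM career.
mentions = {"Ana": 5, "Ben": 1, "Cho": 0, "Dee": 2}   # talkative Ana skews raw totals
completed_challenge = {"Ana": True, "Ben": True, "Cho": False, "Dee": True}

# Raw mention totals over-weight talkative participants...
total_mentions = sum(mentions.values())

# ...so counting *participants* who mentioned it at least once is often fairer.
teens_mentioning_stem = sum(1 for count in mentions.values() if count > 0)

percent_completed = 100 * sum(completed_challenge.values()) / len(completed_challenge)

print(f"{teens_mentioning_stem} of {len(mentions)} teens mentioned a STEM career")
print(f"{percent_completed:.0f}% of participants completed the challenge")
```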
- The PEAR Institute’s Assessment Tools in Informal Science is a database of assessment instruments, complete with user reviews.
- The STEM Learning and Research Center also has a database of assessment instruments.
- The Activation Lab provides instruments to assess science learning activation, as well as an observation protocol to measure engagement with learning.
- IDEO has created a [**Human-Centered Design Kit**](http://www.designkit.org/methods) with dozens of methods for gathering ideas and information, many of which can be used for creatively assessing connected learning programs.
- InformalScience.org has a collection of evaluation materials from informal STEM learning projects, including evaluation reports and assessment instruments.