
Reproducibility enhancing modules

hlageek edited this page May 12, 2024 · 3 revisions

On the Report page you will find several modules that increase researcher reflexivity and coding transparency. These modules work only in the web version of reQual, which allows multiple coders to code the same data and enables comparison between them.

Summary

The first tab, Summary, provides an overview of the number of segments tagged with each code in each document. To see how often and where a particular coder used the codes, select only that coder in the Select users menu. Similarly, the selection can be limited to certain codes or documents if there are many of them and the full table is too large.


Agreement

The Agreement tab makes the agreement between codes, coders, and their attributes visible. Expanding the Select metrics menu reveals two sets of controls. The first shows agreement calculated from the number of overlapping letters (characters); the second shows agreement based on the overlap of the marked segments regardless of the number of characters (segments). The calculation uses the Jaccard similarity coefficient: the ratio of the number of identically coded characters to the total number of coded characters, with the overlap counted only once.
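As a sketch of the character-level metric (not reQual's actual implementation), each coder's markings can be treated as a set of character positions, so that the Jaccard coefficient is the size of the intersection divided by the size of the union. The span positions below are illustrative:

```python
def jaccard(a, b):
    """Jaccard similarity of two sets: |A ∩ B| / |A ∪ B|."""
    union = a | b
    if not union:
        return 0.0
    return len(a & b) / len(union)

def char_positions(spans):
    """Expand (start, end) spans into the set of character indices they cover."""
    return {i for start, end in spans for i in range(start, end)}

# Illustrative spans: coder A marked characters 0-54, coder B marked 0-70
coder_a = char_positions([(0, 55)])
coder_b = char_positions([(0, 71)])
print(round(jaccard(coder_a, coder_b), 2))  # 55 / (55 + 71 - 55) ≈ 0.77
```

Because the intersection is subtracted from the sum of both coders' totals, identically coded characters are never counted twice.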

We can illustrate the difference between agreement in characters and agreement in segments by an example:

  • Coder A marks: But it's just me, the problem with me is that I don't drink beer, I don't like it (55 characters).
  • Coder B marks: But just me, the problem with me is that I don't drink beer, I don't like it. I'm not an alcoholic. (71 characters)

The character agreement will be 0.77 according to the calculation 55/(55+71-55) = 0.77
The agreement in segments will be 1.00 according to the calculation 1/(1+1-1) = 1.00

  • Coder A marks: But just me, the problem with me is that I don't drink beer, I don't like it. (55 characters)
  • Coder B marks: But it's just me, the problem with me is that I don't drink beer, (...) I'm not an alcoholic. (43 + 16 = 59 characters)

The agreement in characters will be 0.61 according to the calculation 43/(55+59-43) = 0.61
The agreement in segments will be 0.50 according to the calculation 1/(1+2-1) = 0.50

From these examples we can see that each criterion is sensitive to different qualities of coding. While the number of common characters is a more mechanical measure of agreement, agreement in segments better reflects the meaning component of coding the data.
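The second example above can be reproduced in a short sketch (hypothetical helpers, not reQual's code). At the segment level, each marked segment counts as one unit, and a pair of segments counts as an overlap when they intersect at all; the character positions below are illustrative approximations of the example:

```python
def spans_overlap(s1, s2):
    """True if two (start, end) character spans intersect."""
    return s1[0] < s2[1] and s2[0] < s1[1]

def segment_jaccard(a_spans, b_spans):
    """Segment-level Jaccard: overlapping pairs / (all segments minus overlaps)."""
    overlaps = sum(1 for a in a_spans for b in b_spans if spans_overlap(a, b))
    return overlaps / (len(a_spans) + len(b_spans) - overlaps)

# Second example: coder A marks one segment, coder B marks two segments,
# only the first of which intersects A's segment (illustrative positions).
a_spans = [(0, 55)]
b_spans = [(0, 43), (80, 96)]
print(segment_jaccard(a_spans, b_spans))  # 1 / (1 + 2 - 1) = 0.5
```

Note that the same pair of coders scores 0.61 in characters but only 0.50 in segments, because coder B's second, non-overlapping segment counts fully against the denominator.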

Total overlap [character]

Calculates the total coding overlap in characters across all documents for the selected coders. This table provides an overview of the total overlap of coders in the project.

Overlap by code [character]

Calculates the coding agreement in characters across all documents for each code and for the selected coders. This table provides an overview of whether the overlap varies by code. It allows the identification of problematic codes.

Overlap by coder [character]

Displays a heatmap that compares pairs of coders based on their character agreement across all documents and all codes. The lighter the color, the higher the agreement. This view allows identification of the pairs of coders that are most "aligned" overall, and those that are least aligned.
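A heatmap like this is simply a symmetric matrix of pairwise agreements. The following is a minimal sketch of how such a matrix could be built; the coder names, data, and `jaccard` helper are illustrative assumptions, not reQual's API:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two sets of coded character positions."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical data: per-coder sets of character positions coded across all documents
codings = {
    "coder1": set(range(0, 60)),
    "coder2": set(range(20, 80)),
    "coder3": set(range(50, 90)),
}

# Build the symmetric pairwise-agreement matrix underlying the heatmap;
# each coder agrees perfectly with themselves (the diagonal is 1.0).
matrix = {c: {c: 1.0} for c in codings}
for x, y in combinations(codings, 2):
    score = jaccard(codings[x], codings[y])
    matrix[x][y] = matrix[y][x] = score

print(matrix["coder1"]["coder2"])  # 40 / (60 + 60 - 40) = 0.5
```

The same construction works for the segment-level metric: only the pairwise scoring function changes, while the matrix and its rendering as a heatmap stay identical.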

Overlap by coder and code [character]

Displays a heatmap for each code separately, comparing pairs of coders based on their agreement in characters across all documents. The lighter the color, the higher the agreement. This view lets you see whether the pairwise agreement between coders is similar across all codes, or whether and how it differs between codes.

Overlap by user attribute and code [character]

Displays a heatmap for the selected user (coder) attribute that compares groups of coders based on their agreement in characters for each code across all documents. The lighter the color, the higher the agreement. This view may reveal codes that are sensitive to some of the coders' attributes. For example, in the diagram we see that for code 2 there is more agreement between males and females than among males, which suggests that the gender of the coders does not play a role in this code.

image

Segment matching

The same set of indicators exists for agreement in segments. Their interpretation is the same, the only difference being that agreement is counted not in identically coded characters but in overlapping segments.

Text overlap

If we want to examine the coding agreement in more detail, we use the Text overlap tab. In the menu, we select which document and which code to display, and optionally specific coders, for example if we want to see the agreement between just two of them.

Clicking the Browse button shows the coders' overlaps for the given code and document. The lighter the color, the higher the agreement between the coders.

When we hover the mouse over a highlighted segment, the names of the coders who coded it are displayed. This provides a good basis for a team discussion about coding, for example about how long the coded segments should be, or how distinct an instance of a phenomenon must be to warrant coding.


Tip! This module can also be used to let users with permission to view the report see how individual coders coded, e.g. in a class project.
