Kappa Coefficients

Two novice raters will be selected to read the excerpts and themes in order to determine inter-rater reliability.  The raters will be provided with a list of excerpts and a list of themes and instructed to read each excerpt and each theme before beginning the analysis.  Once they have read over the materials, they will read each excerpt, one by one, and rate whether each theme is present or absent in that excerpt.  A rater may determine that none, one, some, or all of the themes are present in a given excerpt; each theme may be present in more than one excerpt, and each excerpt may contain more than one theme.
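As an illustration of how the two raters' present/absent codes can be tallied into the cell counts used later in the calculation, the sketch below pairs the raters' judgments for each excerpt-theme combination (the function name, variable names, and example data are hypothetical):

```python
# Tally two raters' present/absent codes into four agreement counts.
# Each list holds one True (theme present) / False (theme absent)
# judgment per excerpt-theme pair; the data here are illustrative only.

def tally_agreement(rater1, rater2):
    a = b = c = d = 0
    for r1, r2 in zip(rater1, rater2):
        if r1 and r2:        # both raters mark the theme present
            a += 1
        elif r1 and not r2:  # rater 1 present, rater 2 absent
            b += 1
        elif not r1 and r2:  # rater 1 absent, rater 2 present
            c += 1
        else:                # both raters mark the theme absent
            d += 1
    return a, b, c, d

# Hypothetical codes for six excerpt-theme pairs:
rater1 = [True, True, False, False, True, False]
rater2 = [True, False, False, False, True, True]
print(tally_agreement(rater1, rater2))  # -> (2, 1, 1, 2)
```

The counts A and D capture agreement (both present or both absent), while B and C capture the two directions of disagreement.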

A Kappa coefficient will be used to verify the presence of the themes identified.  The Kappa coefficient is a statistical measure of inter-rater reliability that quantifies agreement between two raters on categorical judgments, such as coding qualitative documents.  The equation used to calculate kappa is:

κ = (Pr(a) - Pr(e)) / (1 - Pr(e)),

where Pr(a) is the observed agreement among the raters and Pr(e) is the hypothetical probability of chance agreement.  The formula will be entered into Microsoft Excel, which will be used to calculate the Kappa coefficient.  Table 1 below depicts each step of the calculation.  Kappa coefficients are interpreted using the guidelines outlined by Landis and Koch (1977), where the strength of a kappa coefficient is interpreted in the following manner: 0.01-0.20 slight; 0.21-0.40 fair; 0.41-0.60 moderate; 0.61-0.80 substantial; 0.81-1.00 almost perfect.
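The Landis and Koch (1977) bands above can be expressed as a simple lookup; the sketch below maps a kappa value to its strength label (the function name is illustrative, and the handling of values at or below zero, which Landis and Koch label "poor", is an assumption beyond the bands listed above):

```python
# Map a kappa value to the Landis and Koch (1977) strength-of-agreement
# label described in the text; function name is illustrative.

def landis_koch_label(kappa):
    if kappa <= 0.0:
        return "poor"        # at or below chance; not in the bands above
    elif kappa <= 0.20:
        return "slight"
    elif kappa <= 0.40:
        return "fair"
    elif kappa <= 0.60:
        return "moderate"
    elif kappa <= 0.80:
        return "substantial"
    else:
        return "almost perfect"

print(landis_koch_label(0.72))  # -> substantial
```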

Table 1

                                         Rater 2
                          Marker present    Marker absent    Subtotal
  Rater 1
    Marker present               A                B            A+B
    Marker absent                C                D            C+D
  Subtotal                      A+C              B+D         A+B+C+D

Note. A, B, C, and D are the frequencies with which a marker is identified in the same excerpt by raters 1 and 2.

Observed agreement = (A+D)

Expected agreement = (((A+B)*(A+C))+((C+D)*(B+D)))/(A+B+C+D)

Kappa = ((Observed agreement) - (Expected agreement))/((A+B+C+D) - (Expected agreement))
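The three formulas above can be combined into a single calculation; the sketch below reproduces the count-based steps in Python rather than Excel (the function name and example counts are hypothetical):

```python
# Compute the kappa coefficient from the 2x2 cell counts A-D using the
# count-based observed and expected agreement formulas given in the text.

def kappa_from_counts(a, b, c, d):
    n = a + b + c + d
    observed = a + d                                        # (A+D)
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n  # chance agreement
    return (observed - expected) / (n - expected)

# Hypothetical counts: 40 joint "present" codes, 45 joint "absent"
# codes, and 15 disagreements across 100 excerpt-theme pairs.
print(round(kappa_from_counts(40, 9, 6, 45), 3))
```

With perfect agreement (B = C = 0), observed equals n and the function returns 1.0, as expected.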

References

Landis, J. R., & Koch, G. G. (1977).  The measurement of observer agreement for categorical data.  Biometrics, 33, 159-174.

Quanta Healthcare Solutions, Inc. (2002).  Calculating the kappa coefficient for 2 observations by 2 observers.  The Medical Algorithms Project.  Retrieved January 2, 2003, from http://www.medal.org

Statistics Solutions. (2013). Data analysis plan: Kappa Coefficients [WWW Document]. Retrieved from http://www.statisticssolutions.com/academic-solutions/member-resources/member-profile/data-analysis-plan-templates/data-analysis-plan-kappa-coefficients/