Showing all 3 results
Peer reviewed
Conger, Anthony J. – Educational and Psychological Measurement, 2017
Drawing parallels to classical test theory, this article clarifies the difference between rater accuracy and reliability and demonstrates how category marginal frequencies affect rater agreement and Cohen's kappa. Category assignment paradigms are developed: comparing raters to a standard (index) versus comparing two raters to one another…
Descriptors: Interrater Reliability, Evaluators, Accuracy, Statistical Analysis
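As a quick illustration of the marginal-frequency effect this abstract describes, here is a minimal Python sketch (the confusion matrices and function are hypothetical examples, not taken from the article): two rater pairs with identical 90% raw agreement can yield very different kappa values once chance agreement is computed from the category marginals.

```python
def cohens_kappa(table):
    """Cohen's kappa for a square confusion matrix of rater counts."""
    total = sum(sum(row) for row in table)
    n = len(table)
    p_o = sum(table[i][i] for i in range(n)) / total           # observed agreement
    row_m = [sum(row) / total for row in table]                # rater 1 marginals
    col_m = [sum(table[i][j] for i in range(n)) / total
             for j in range(n)]                                # rater 2 marginals
    p_e = sum(row_m[i] * col_m[i] for i in range(n))           # chance agreement
    return (p_o - p_e) / (1 - p_e)

balanced = [[45, 5], [5, 45]]   # 90% agreement, balanced marginals
skewed   = [[85, 5], [5, 5]]    # 90% agreement, skewed marginals

print(cohens_kappa(balanced))   # 0.80
print(cohens_kappa(skewed))     # ~0.44: same raw agreement, lower kappa
```

With skewed marginals, chance agreement p_e rises (here from 0.50 to 0.82), so the same observed agreement earns a much lower kappa.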
Peer reviewed
Gorard, Stephen – International Journal of Research & Method in Education, 2009
The author previously published a paper discussing how to conduct an analysis based on a cluster sample. In that paper, the author outlined several widely adopted alternative approaches and pointed out that such approaches are in any case unnecessary for population figures and impossible for non-probability samples. Thus, the author queried the…
Descriptors: Probability, Misconceptions, Reader Response, Research Methodology
Peer reviewed
Blackman, Nicole J-M.; Koval, John J. – Applied Psychological Measurement, 1993
Four chance-corrected indexes of agreement between ratings of a person, each interpretable as an intraclass correlation coefficient under a different analysis of variance model, are investigated. Relationships among the estimators are established for finite samples, and the equivalence of these estimators in large samples is demonstrated. (SLD)
Descriptors: Analysis of Variance, Equations (Mathematics), Estimation (Mathematics), Interrater Reliability
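To make the ICC interpretation concrete, here is a minimal Python sketch of one such index, ICC(1,1) from a one-way random-effects ANOVA; this is an assumed example for illustration, not one of the four estimators the authors compare, and the ratings data are made up.

```python
def icc_oneway(ratings):
    """ICC(1,1): n subjects each rated by k raters, one-way ANOVA model."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(x for row in ratings for x in row) / (n * k)
    means = [sum(row) / k for row in ratings]                  # per-subject means
    ms_between = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(ratings, means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

ratings = [[9, 8, 9], [5, 6, 5], [2, 3, 2], [7, 7, 8]]  # 4 subjects, 3 raters
print(round(icc_oneway(ratings), 3))                    # ~0.957: high agreement
```

The estimator is high when between-subject variance dominates within-subject (between-rater) variance, which is the sense in which such indexes correct agreement for chance.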