Showing all 4 results
Peer reviewed
Kvalseth, Tarald O. – Educational and Psychological Measurement, 1991
An asymmetric version of J. Cohen's kappa statistic is presented as an appropriate measure for the agreement between two observers classifying items into nominal categories, when one observer represents the "standard." A numerical example with three categories is provided. (SLD)
Descriptors: Classification, Equations (Mathematics), Interrater Reliability, Mathematical Models
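As background for this entry: the abstract does not reproduce the article's asymmetric statistic, but a minimal Python sketch of the standard, symmetric Cohen's kappa it modifies may help situate it. The three-category data below are hypothetical, merely echoing the abstract's setup of an observer scored against a "standard."

# A minimal sketch of standard (symmetric) Cohen's kappa for two raters
# assigning items to nominal categories. The asymmetric variant presented
# in the article is NOT reproduced here; this is background only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Standard Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal proportions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() & freq_b.keys()) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical three-category example: "standard" vs. second observer.
standard = ["x", "x", "y", "y", "z", "z", "x", "y"]
observer = ["x", "x", "y", "z", "z", "z", "x", "x"]
print(round(cohens_kappa(standard, observer), 3))  # prints 0.628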
Peer reviewed
Towstopiat, Olga – Contemporary Educational Psychology, 1984
The present article reviews the procedures that have been developed for measuring the reliability of human observers' judgments when making direct observations of behavior. These include the percentage of agreement, Cohen's Kappa, phi, and univariate and multivariate agreement measures that are based on quasi-equiprobability and quasi-independence…
Descriptors: Interrater Reliability, Mathematical Models, Multivariate Analysis, Observation
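Of the measures this review covers, percentage of agreement is the simplest and makes a useful baseline. A minimal Python sketch, on hypothetical binary ratings, of the raw-agreement computation that chance-corrected indices such as kappa and phi are designed to improve on:

# Percentage agreement: the share of observations on which two observers
# record the same category. It makes no correction for chance agreement.
def percent_agreement(rater_a, rater_b):
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Hypothetical ratings where one category dominates: raw agreement is
# 0.8 here even though chance-corrected agreement (Cohen's kappa) on the
# same data is slightly negative, which is why the review treats the
# chance-corrected measures separately.
ratings_1 = [1, 1, 1, 1, 1, 1, 1, 0, 1, 1]
ratings_2 = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
print(percent_agreement(ratings_1, ratings_2))  # prints 0.8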
Rowley, Glenn L. – 1986
Classroom researchers are frequently urged to provide evidence of the reliability of their data. For observational data, three approaches have emerged: observer agreement, generalizability theory, and measurement error. Generalizability theory provides the most powerful approach given an adequate data collection design, but…
Descriptors: Classroom Observation Techniques, Classroom Research, Correlation, Elementary Education
Webber, Larry; And Others – 1986
Generalizability theory, which subsumes classical measurement theory as a special case, provides a general model for estimating the reliability of observational rating data by estimating the variance components of the measurement design. Research data from the "Heart Smart" health intervention program were analyzed as a heuristic tool.…
Descriptors: Behavior Rating Scales, Cardiovascular System, Error of Measurement, Generalizability Theory
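As a companion to this entry: a minimal Python sketch, on hypothetical data rather than the "Heart Smart" program's, of the variance-component logic the abstract describes. It assumes a fully crossed persons-by-raters G study, estimates components from ANOVA mean squares, and combines them into a generalizability coefficient.

# One-facet persons x raters G study on hypothetical scores.
scores = [  # rows = persons, columns = raters
    [4, 5, 3],
    [3, 4, 2],
    [5, 5, 4],
    [2, 3, 2],
]
n_p, n_r = len(scores), len(scores[0])

grand = sum(sum(row) for row in scores) / (n_p * n_r)
person_means = [sum(row) / n_r for row in scores]
rater_means = [sum(scores[p][r] for p in range(n_p)) / n_p for r in range(n_r)]

# Sums of squares for persons, raters, and residual (the p x r interaction
# is confounded with error when there is one observation per cell).
ss_p = n_r * sum((m - grand) ** 2 for m in person_means)
ss_r = n_p * sum((m - grand) ** 2 for m in rater_means)
ss_total = sum((scores[p][r] - grand) ** 2
               for p in range(n_p) for r in range(n_r))
ss_pr = ss_total - ss_p - ss_r

ms_p = ss_p / (n_p - 1)
ms_r = ss_r / (n_r - 1)
ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

# Expected-mean-square solutions for the variance components. (Negative
# estimates, which can occur in practice, are conventionally set to zero.)
var_pr = ms_pr
var_p = (ms_p - ms_pr) / n_r
var_r = (ms_r - ms_pr) / n_p

# Generalizability coefficient for a relative decision averaged over n_r raters.
g_coef = var_p / (var_p + var_pr / n_r)
print(round(var_p, 3), round(var_r, 3), round(var_pr, 3), round(g_coef, 3))
# prints 1.028 0.528 0.139 0.957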