ERIC Number: ED359229
Record Type: RIE
Publication Date: 1993-Jan
Pages: 32
Abstractor: N/A
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Sampling Variability of Performance Assessments. Report on the Status of Generalizability and Transfer of Performance Assessments. Project 2.4: Design Theory and Psychometrics for Complex Performance Assessment in Science.
Shavelson, Richard J.; And Others
In this paper, performance assessments are cast within a sampling framework. A performance assessment score is viewed as a sample of student performance drawn from a complex universe defined by a combination of all possible tasks, occasions, raters, and measurement methods. Using generalizability theory, the authors present evidence bearing on the generalizability (reliability) and convergent validity of performance assessments sampled from a range of measurement facets, measurement methods, and databases. Results at both the individual and school levels indicate that rater-sampling variability is not an issue: raters (e.g., teachers, job incumbents) can be trained to judge performance on complex tasks consistently. Rather, task-sampling variability is the major source of measurement error. Large numbers of tasks are needed to obtain a reliable measure of mathematics and science achievement at the elementary level, or of job performance in the military. With respect to convergent validity, results suggest that methods do not converge. Performance scores, then, depend on both the task and the method sampled. (Contains 36 references.) (Author)
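To make the sampling framework concrete, the sketch below (an illustration added to this record, not material from the report; the score matrix is hypothetical) estimates variance components for a crossed persons-by-tasks G-study and computes the generalizability coefficient for relative decisions, E(rho^2) = var_p / (var_p + var_pt / n_tasks). It shows why, when task-sampling variance (var_pt) is large relative to universe-score variance (var_p), many tasks must be sampled before scores generalize, which is the pattern the abstract reports.

    import numpy as np

    # Hypothetical persons-by-tasks score matrix (rows: students, cols: tasks).
    # Values are illustrative only, not data from the report.
    scores = np.array([
        [4, 2, 5, 3],
        [3, 1, 4, 2],
        [5, 3, 5, 4],
        [2, 2, 3, 1],
    ], dtype=float)

    n_p, n_t = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    task_means = scores.mean(axis=0)

    # Mean squares for a crossed p x t design with one observation per cell.
    ms_p = n_t * ((person_means - grand) ** 2).sum() / (n_p - 1)
    ms_t = n_p * ((task_means - grand) ** 2).sum() / (n_t - 1)  # task main effect;
    # enters absolute- but not relative-decision coefficients
    resid = scores - person_means[:, None] - task_means[None, :] + grand
    ms_pt = (resid ** 2).sum() / ((n_p - 1) * (n_t - 1))

    # Expected-mean-square estimates of the variance components.
    var_p = max((ms_p - ms_pt) / n_t, 0.0)  # universe-score (person) variance
    var_pt = ms_pt                          # task-sampling + residual error

    # Generalizability coefficient for a mean over n_tasks tasks: the larger
    # var_pt is relative to var_p, the more tasks are needed.
    for n_tasks in (1, 4, 10, 20):
        g = var_p / (var_p + var_pt / n_tasks)
        print(f"tasks={n_tasks:2d}  E(rho^2)={g:.2f}")

The same expected-mean-square logic extends to the multi-facet designs (tasks, occasions, raters, methods) the abstract describes; a rater facet, for instance, would add rater and person-by-rater components to the error term.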
Descriptors: Academic Achievement, Educational Assessment, Error of Measurement, Evaluators, Generalizability Theory, Interrater Reliability, Job Performance, Mathematics Achievement, Measurement Techniques, Performance Based Assessment, Sampling, Science Achievement, Science Instruction, Scores, Scoring, Student Evaluation, Test Reliability, Test Validity, Training
Publication Type: Reports - Evaluative
Education Level: N/A
Audience: N/A
Language: English
Sponsor: California Univ., Berkeley. Office of the President.; Office of Educational Research and Improvement (ED), Washington, DC.; National Science Foundation, Washington, DC.
Authoring Institution: National Center for Research on Evaluation, Standards, and Student Testing, Los Angeles, CA.
Grant or Contract Numbers: N/A
Author Affiliations: N/A