Peer reviewed
ERIC Number: EJ754606
Record Type: Journal
Publication Date: 2007
Pages: 25
Abstractor: Author
ISBN: N/A
ISSN: 1530-5058
EISSN: N/A
Available Date: N/A
Correcting for Person Misfit in Aggregated Score Reporting
Brown, Richard S.; Villarreal, Julio C.
International Journal of Testing, v7 n1 p1-25 2007
There has been considerable research regarding the extent to which psychometrically sound assessments sometimes yield individual score estimates that are inconsistent with the individual's response pattern. It has been suggested that individual response patterns may differ from expectations for a number of reasons, including subject motivation, attention, test dimensionality, test bias, or even cheating. Whatever the reason, otherwise sound assessments may yield individual score estimates that do not truly reflect a subject's underlying trait level. In large-scale testing situations, the extent of individual misfit may materially affect aggregated score reports for relevant subgroups of students. This article investigates the impact of using individual person-fit measures from item response theory (IRT) scored achievement examinations as weighting factors in producing aggregated score reports. About 160,000 students from more than 300 schools completed a mathematics or reading test in a computer-adaptive environment. A standardized person-fit statistic (Drasgow, Levine, & Williams, 1985) was used to estimate the degree of misfit for each subject. A brief simulation study provided cut points for determining misfit in a computer-adaptive environment. Step and logistic functions were applied to these person-fit statistics to determine credibility weights for each respondent. Data were then aggregated using the credibility weights to produce school-level estimates for all students and for each student subgroup. The weighted aggregated estimates were then compared with unweighted estimates to identify the impact of using person-fit measures as weighting factors for aggregated score reporting. As expected, weighted group estimates generally produced significantly different group scores than unweighted group estimates, but did so differentially across student subgroups (e.g., ethnicity, grade). It is argued that person-fit measures may provide a useful correction factor for model misfit, generating better aggregated score estimates in large-scale testing contexts without jeopardizing relative school standing.
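The abstract describes a three-step pipeline: score each examinee under an IRT model, compute the standardized person-fit statistic l_z (Drasgow, Levine, & Williams, 1985), and convert it into a credibility weight used when aggregating scores. The sketch below illustrates those steps under assumptions the record does not specify: a 2PL response model, an illustrative logistic weight whose cut point and steepness are hypothetical, and invented function and variable names. It is not the authors' code, and the cut points calibrated in their simulation study are not reproduced here.

```python
# Minimal sketch of person-fit-weighted aggregation, assuming a 2PL IRT model.
# The weight parameters (cut, steepness) are illustrative, not the article's.
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response given ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def lz(responses, theta, a, b):
    """Standardized log-likelihood person-fit statistic l_z
    (Drasgow, Levine, & Williams, 1985).

    responses: 0/1 vector of item scores; a, b: item parameters.
    Large negative values indicate aberrant (misfitting) response patterns.
    """
    p = p_2pl(theta, a, b)
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - expected) / np.sqrt(variance)

def credibility_weight(lz_value, cut=-1.645, steepness=4.0):
    """Logistic credibility weight in (0, 1): near 1 for well-fitting
    patterns, shrinking toward 0 as l_z falls below the cut point.
    Both parameters are assumed values for illustration."""
    return 1.0 / (1.0 + np.exp(-steepness * (lz_value - cut)))

def weighted_school_mean(thetas, weights):
    """Credibility-weighted aggregate score estimate for one school."""
    return np.sum(weights * thetas) / np.sum(weights)

# Example: one simulated examinee on a 40-item test.
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, size=40)   # discrimination parameters
b = rng.normal(0.0, 1.0, size=40)    # difficulty parameters
theta = 0.5
u = (rng.random(40) < p_2pl(theta, a, b)).astype(int)
w = credibility_weight(lz(u, theta, a, b))
```

A step-function alternative, which the abstract also mentions, would assign weight 0 below the simulation-derived cut point and 1 above it; the logistic form replaces that hard threshold with a smooth down-weighting of borderline response patterns.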
Lawrence Erlbaum Associates, Inc. 10 Industrial Avenue, Mahwah, NJ 07430-2262. Tel: 800-926-6579; Tel: 201-258-2200; Fax: 201-236-0072; e-mail: journals@erlbaum.com; Web site: https://www.LEAonline.com
Publication Type: Journal Articles; Reports - Evaluative
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: N/A