ERIC Number: EJ1480466
Record Type: Journal
Publication Date: 2025-Dec
Pages: 18
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: EISSN-2196-0739
Available Date: 2025-08-14
Linking Errors Introduced by Rapid Guessing Responses When Employing Multigroup Concurrent IRT Scaling
Large-scale Assessments in Education, v13, Article 28, 2025
Background: Test score comparability in international large-scale assessments (LSAs) is of great importance for ensuring test fairness. To compare test scores across countries, score linking is widely used to convert raw scores from different linguistic versions of test forms onto a common score scale. One example is multigroup concurrent IRT calibration, which estimates item and ability parameters across multiple linguistic groups of test-takers. Although prior research has demonstrated its effectiveness in providing greater global comparability of score scales, it assumed comparable test-taking effort across cultural and linguistic populations. This assumption may not hold because of differential rapid guessing (RG) rates, potentially biasing item parameter estimation. To address this gap, this study investigated the linking errors introduced by RG responses when employing multigroup concurrent IRT calibration. Method: RG responses were identified using response time data. The study used data from the Arabic and Chinese groups on the PISA 2018 Form 18 science module. Test scores for these two linguistic groups were linked through multigroup concurrent IRT calibration, which applies common item parameters across most items and groups while allowing a select few items to have group-specific parameters. Item-level model fit was assessed to identify items requiring group-specific parameters. Results: The Arabic group showed notably higher RG rates on the selected test form than the Chinese group. Despite the observed differential RG, the multigroup concurrent IRT calibration procedure was robust in anchor and misfit item identification and in ability estimation. However, differential RG was found to have the potential to reduce the precision of individual ability estimates.
Findings suggest that RG can influence the multigroup concurrent IRT calibration process, potentially compromising the fairness of test scores in international LSAs. Conclusion: This study highlights the critical need to identify and address noneffortful test-taking behaviors, such as RG, to ensure the comparability of test scores across different linguistic versions of an assessment. Additionally, documenting variation in test-taking effort across countries and languages is essential for accurate evaluation of student performance and informed educational decisions.
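The abstract notes that RG responses were identified from response time data but does not specify the decision rule. A minimal sketch of one common approach (assumed here, not taken from the record) is a normative response-time threshold: a response is flagged as rapid guessing when its time falls below a fixed fraction of the item's median response time, and per-group RG rates are then compared.

```python
# Hypothetical sketch: flag rapid-guessing (RG) responses with a
# normative response-time threshold. The 10% fraction and the item
# id "S01" are illustrative assumptions, not values from the study.
from statistics import median

def flag_rapid_guessing(response_times, fraction=0.10):
    """Return per-item RG flags: True where a response time falls
    below `fraction` of that item's median response time.

    response_times: dict mapping item id -> list of response times (s).
    """
    flags = {}
    for item, times in response_times.items():
        threshold = fraction * median(times)
        flags[item] = [t < threshold for t in times]
    return flags

# Example: one item where the two very fast responses are flagged.
rt = {"S01": [45.0, 2.0, 60.0, 3.0, 50.0, 55.0]}
flags = flag_rapid_guessing(rt)
rg_rate = sum(flags["S01"]) / len(flags["S01"])  # share of flagged responses
```

Computing `rg_rate` separately for each linguistic group would reproduce the kind of differential-RG comparison the study describes for the Arabic and Chinese samples.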
Descriptors: Guessing (Tests), Item Response Theory, Error Patterns, Arabic, Chinese, International Assessment, Foreign Countries, Achievement Tests, Secondary School Students, Culture Fair Tests, Scoring Rubrics, Science Tests, Comparative Analysis, Test Items, Item Analysis
Springer. Available from: Springer Nature. One New York Plaza, Suite 4600, New York, NY 10004. Tel: 800-777-4643; Tel: 212-460-1500; Fax: 212-460-1700; e-mail: customerservice@springernature.com; Web site: https://link.springer.com/
Related Records: ED657107
Publication Type: Journal Articles; Reports - Research
Education Level: Secondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers - Assessments and Surveys: Program for International Student Assessment
Grant or Contract Numbers: N/A
Author Affiliations: Human Resources Research Organization, Alexandria, USA

Peer reviewed
