Peer reviewed
PDF on ERIC Download full text
Basaraba, Deni L.; Yovanoff, Paul; Shivraj, Pooja; Ketterlin-Geller, Leanne R. – Practical Assessment, Research & Evaluation, 2020
Stopping rules for fixed-form tests with graduated item difficulty are intended to stop administration of a test at the point where students are sufficiently unlikely to provide a correct response following a pattern of incorrect responses. Although such rules are widely employed in fixed-form tests in education, little research has been done to empirically…
Descriptors: Formative Evaluation, Test Format, Test Items, Difficulty Level
Peer reviewed
Direct link
Lindner, Marlit A.; Schult, Johannes; Mayer, Richard E. – Journal of Educational Psychology, 2022
This classroom experiment investigates the effects of adding representational pictures to multiple-choice and constructed-response test items to understand the role of the response format for the multimedia effect in testing. Participants were 575 fifth- and sixth-graders who answered 28 science test items: seven items in each of four experimental…
Descriptors: Elementary School Students, Grade 5, Grade 6, Multimedia Materials
Peer reviewed
PDF on ERIC Download full text
Kevelson, Marisol J. C. – ETS Research Report Series, 2019
This study presents estimates of Black-White, Hispanic-White, and income achievement gaps using data from two different types of reading and mathematics assessments: constructed-response assessments that were likely more cognitively demanding and state achievement tests that were likely less cognitively demanding (i.e., composed solely or largely…
Descriptors: Racial Differences, Achievement Gap, White Students, African American Students
Peer reviewed
Direct link
Cho, Hyun-Jeong; Lee, Jaehoon; Kingston, Neal – Applied Measurement in Education, 2012
This study examined the validity of test accommodations for third- through eighth-graders using differential item functioning (DIF) and mixture IRT models. Two data sets were used for these analyses. With the first data set (N = 51,591), we examined whether item type (i.e., story, explanation, straightforward) or item features were associated with item…
Descriptors: Testing Accommodations, Test Bias, Item Response Theory, Validity