Showing all 10 results
Peer reviewed
PDF on ERIC Download full text
Basaraba, Deni L.; Yovanoff, Paul; Shivraj, Pooja; Ketterlin-Geller, Leanne R. – Practical Assessment, Research & Evaluation, 2020
Stopping rules for fixed-form tests with graduated item difficulty are intended to stop administration of a test at the point where students are sufficiently unlikely to provide a correct response following a pattern of incorrect responses. Although widely employed in fixed-form tests in education, little research has been done to empirically…
Descriptors: Formative Evaluation, Test Format, Test Items, Difficulty Level
Peer reviewed
PDF on ERIC Download full text
Atalmis, Erkan Hasan – International Journal of Assessment Tools in Education, 2018
Although multiple-choice items (MCIs) are widely used for classroom assessment, designing MCIs with a sufficient number of plausible distracters is very challenging for teachers. In this regard, previous empirical studies reveal that using three-option MCIs provides various advantages when compared to four-option MCIs due to less preparation and…
Descriptors: Multiple Choice Tests, Test Items, Difficulty Level, Test Reliability
Peer reviewed
PDF on ERIC Download full text
Kevelson, Marisol J. C. – ETS Research Report Series, 2019
This study presents estimates of Black-White, Hispanic-White, and income achievement gaps using data from two different types of reading and mathematics assessments: constructed-response assessments that were likely more cognitively demanding and state achievement tests that were likely less cognitively demanding (i.e., composed solely or largely…
Descriptors: Racial Differences, Achievement Gap, White Students, African American Students
Peer reviewed
Direct link
Schwichow, Martin; Christoph, Simon; Boone, William J.; Härtig, Hendrik – International Journal of Science Education, 2016
The so-called control-of-variables strategy (CVS) incorporates the important scientific reasoning skills of designing controlled experiments and interpreting experimental outcomes. As CVS is a prominent component of science standards, appropriate assessment instruments are required to measure these scientific reasoning skills and to evaluate the…
Descriptors: Thinking Skills, Science Instruction, Science Experiments, Science Tests
Peer reviewed
Direct link
Chae, Soo Eun; Kim, Doyoung; Han, Jae-Ho – IEEE Transactions on Education, 2012
Those items or test characteristics that are likely to result in differential item functioning (DIF) across accommodated test forms in statewide tests have received little attention. An examination of elementary-level student performance across accommodated test forms in a large-scale mathematics assessment revealed DIF variations by grades,…
Descriptors: Test Bias, Mathematics Tests, Testing Accommodations, Elementary School Mathematics
Peer reviewed
Direct link
Anagnostopoulou, Kyriaki; Hatzinikita, Vassilia; Christidou, Vasilia; Dimopoulos, Kostas – International Journal of Science Education, 2013
The paper explores the relationship of the global and the local assessment discourses as expressed by Programme for International Student Assessment (PISA) test items and school-based examinations, respectively. To this end, the paper compares PISA test items related to living systems and the context of life, health, and environment, with Greek…
Descriptors: Foreign Countries, Achievement Tests, Secondary School Students, Discourse Analysis
Peer reviewed
Direct link
Taylor, Catherine S.; Lee, Yoonsun – Applied Measurement in Education, 2012
This was a study of differential item functioning (DIF) for grades 4, 7, and 10 reading and mathematics items from state criterion-referenced tests. The tests were composed of multiple-choice and constructed-response items. Gender DIF was investigated using POLYSIBTEST and a Rasch procedure. The Rasch procedure flagged more items for DIF than did…
Descriptors: Test Bias, Gender Differences, Reading Tests, Mathematics Tests
Peer reviewed
Direct link
Cho, Hyun-Jeong; Lee, Jaehoon; Kingston, Neal – Applied Measurement in Education, 2012
This study examined the validity of test accommodation in third- through eighth-grade students using differential item functioning (DIF) and mixture IRT models. Two data sets were used for these analyses. With the first data set (N = 51,591) we examined whether item type (i.e., story, explanation, straightforward) or item features were associated with item…
Descriptors: Testing Accommodations, Test Bias, Item Response Theory, Validity
Peer reviewed
Direct link
Zebehazy, Kim T.; Zigmond, Naomi; Zimmerman, George J. – Journal of Visual Impairment & Blindness, 2012
Introduction: This study investigated the use of accommodations and the performance of students with visual impairments and severe cognitive disabilities on the Pennsylvania Alternate System of Assessment (PASA), an alternate performance-based assessment. Methods: Differences in test scores on the most basic level (level A) of the PASA of 286…
Descriptors: Test Items, Visual Impairments, Alternative Assessment, Special Needs Students
Liu, Kimy; Ketterlin-Geller, Leanne R.; Yovanoff, Paul; Tindal, Gerald – Behavioral Research and Teaching, 2008
BRT Math Screening Measures focus on students' mathematics performance in grade-level standards for students in grades 1-8. A total of 24 test forms are available with three test forms per grade corresponding to fall, winter, and spring testing periods. Each form contains computation problems and application problems. BRT Math Screening Measures…
Descriptors: Test Items, Test Format, Test Construction, Item Response Theory