Showing all 15 results
Peer reviewed
Direct link
Michelle Cheong – Journal of Computer Assisted Learning, 2025
Background: Increasingly, students are using ChatGPT to assist them in learning and even in completing their assessments, raising concerns about academic integrity and the loss of critical thinking skills. Many articles have suggested that educators redesign assessments to be more 'Generative-AI-resistant' and focus on assessing students on higher order…
Descriptors: Artificial Intelligence, Performance Based Assessment, Spreadsheets, Models
Peer reviewed
Direct link
Lim, Hwanggyu; Choe, Edison M. – Journal of Educational Measurement, 2023
The residual differential item functioning (RDIF) detection framework was recently developed in a linear testing context. To explore the potential application of this framework to computerized adaptive testing (CAT), the present study investigated the utility of the RDIF_R statistic both as an index for detecting uniform DIF of…
Descriptors: Test Items, Computer Assisted Testing, Item Response Theory, Adaptive Testing
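As a rough illustration of the residual DIF idea behind RDIF_R: under a fitted IRT model, each response leaves a residual (observed score minus model-expected probability of success), and a uniform-DIF index can be formed as the difference in mean residuals between the focal and reference groups. Below is a minimal Python sketch assuming a 2PL model; the function names and the simulation are illustrative, not the authors' exact estimator.

import numpy as np

def p_2pl(theta, a, b):
    # Probability of a correct response under a two-parameter logistic (2PL) model.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def rdif_r(responses, thetas, groups, a, b):
    # Raw-residual DIF index for one item: mean residual in the focal group
    # minus mean residual in the reference group. Values near 0 suggest no
    # uniform DIF; a negative value means the item under-performs for the
    # focal group relative to the model. (Illustrative form of RDIF_R.)
    resid = responses - p_2pl(thetas, a, b)
    return resid[groups == "focal"].mean() - resid[groups == "reference"].mean()

# Simulated responses for one item, with uniform DIF injected for the focal group.
rng = np.random.default_rng(1)
n = 2000
thetas = rng.normal(size=n)
groups = np.where(rng.random(n) < 0.5, "focal", "reference")
a, b = 1.2, 0.3
p = p_2pl(thetas, a, b)
p = np.where(groups == "focal", p_2pl(thetas, a, b + 0.4), p)  # item harder for focal group
responses = (rng.random(n) < p).astype(float)
print(rdif_r(responses, thetas, groups, a, b))  # clearly negative, flagging uniform DIF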
Peer reviewed
Direct link
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Grantee Submission, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
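For context on the linearity assumption both versions of this paper examine: a basic explanatory item response model takes the logit of a correct response to be linear in person and item covariates, e.g., logit P(y_pi = 1) = theta_p + beta_1 * x_i + beta_2 * z_p. The sketch below simulates data from such a model and recovers the covariate effects with ordinary logistic regression; all names are illustrative, and a full EIRM would treat theta_p as a random effect (e.g., glmer in R's lme4).

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_persons, n_items = 500, 20
theta = rng.normal(size=n_persons)        # person ability
item_cov = rng.normal(size=n_items)       # e.g., an item's word frequency
person_cov = rng.normal(size=n_persons)   # e.g., a person's vocabulary score

# Linear-logit EIRM: logit P = theta_p + 0.8 * item_cov_i + 0.5 * person_cov_p
logit = theta[:, None] + 0.8 * item_cov[None, :] + 0.5 * person_cov[:, None]
y = (rng.random((n_persons, n_items)) < 1 / (1 + np.exp(-logit))).astype(int)

# Fixed-effects-only approximation via plain logistic regression; ignoring the
# random person effect attenuates the coefficients somewhat.
X = sm.add_constant(np.column_stack([
    np.repeat(person_cov, n_items),   # person covariate, person-major order
    np.tile(item_cov, n_persons),     # item covariate, matching y.ravel()
]))
fit = sm.Logit(y.ravel(), X).fit(disp=0)
print(fit.params)  # roughly [0, 0.5, 0.8], up to attenuation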
Peer reviewed
Direct link
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Journal of Educational Measurement, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Peer reviewed
Direct link
Tremblay, Kathryn A.; Binder, Katherine S.; Ardoin, Scott P.; Talwar, Amani; Tighe, Elizabeth L. – Journal of Research in Reading, 2021
Background: Of the myriad reading comprehension (RC) assessments used in schools, multiple-choice (MC) questions remain one of the most prevalent formats used by educators and researchers. Outcomes from RC assessments dictate many critical factors encountered during a student's academic career, and it is crucial that we gain a deeper…
Descriptors: Grade 3, Elementary School Students, Reading Comprehension, Decoding (Reading)
Peer reviewed
Direct link
Wood, Carla; Schatschneider, Christopher – Journal of Speech, Language, and Hearing Research, 2019
Purpose: This study examines the response patterns of 278 Spanish-English dual language learners (DLLs) on a standardized test of receptive English vocabulary. Method: Investigators analyzed responses to 131 items on the Peabody Picture Vocabulary Test--Fourth Edition (Dunn & Dunn, 2007) focusing on differential accuracy on items influenced by…
Descriptors: Spanish, English, Receptive Language, Vocabulary
Nelson, Gena; Powell, Sarah R. – Assessment for Effective Intervention, 2018
Though proficiency with computation is highly emphasized in national mathematics standards, students with mathematics difficulty (MD) continue to struggle with computation. To learn more about the differences in computation error patterns between typically achieving students and students with MD, we assessed 478 third-grade students on a measure…
Descriptors: Computation, Mathematics Instruction, Learning Problems, Mathematics Skills
Peer reviewed
PDF on ERIC Download full text
Chen, Binglin; West, Matthew; Zilles, Craig – International Educational Data Mining Society, 2018
This paper attempts to quantify the accuracy limit of "next-item-correct" prediction by using numerical optimization to estimate the student's probability of getting each question correct given a complete sequence of item responses. This optimization is performed without an explicit parameterized model of student behavior, but with the…
Descriptors: Accuracy, Probability, Student Behavior, Test Items
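The "accuracy limit" in question has a simple form: if a student's true probability of answering the next item correctly is p, no predictor of the binary outcome can be right more than max(p, 1 - p) of the time on that item. The toy sketch below uses invented probabilities; the paper instead estimates them from observed response sequences by numerical optimization.

import numpy as np

rng = np.random.default_rng(7)
p_true = rng.beta(5, 2, size=10_000)           # invented per-response success probabilities
outcomes = rng.random(p_true.size) < p_true    # simulated correct/incorrect outcomes

# Best possible average accuracy, and the accuracy an oracle predictor
# (predict "correct" iff p >= 0.5) actually achieves on the simulated data.
ceiling = np.maximum(p_true, 1 - p_true).mean()
oracle = (p_true >= 0.5) == outcomes
print(f"theoretical ceiling: {ceiling:.3f}")
print(f"oracle accuracy:     {oracle.mean():.3f}")  # close to the ceiling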
Kathryn A. Tremblay; Katherine S. Binder; Scott P. Ardoin; Amani Talwar; Elizabeth L. Tighe – Grantee Submission, 2021
Background: Of the myriad reading comprehension (RC) assessments used in schools, multiple-choice (MC) questions remain one of the most prevalent formats used by educators and researchers. Outcomes from RC assessments dictate many critical factors encountered during a student's academic career, and it is crucial that we gain a deeper…
Descriptors: Reading Strategies, Eye Movements, Expository Writing, Grade 3
Peer reviewed
Direct link
Gierl, Mark J.; Bulut, Okan; Guo, Qi; Zhang, Xinxin – Review of Educational Research, 2017
Multiple-choice testing is considered one of the most effective and enduring forms of educational assessment in practice today. This study presents a comprehensive review of the literature on multiple-choice testing in education, focused specifically on the development, analysis, and use of the incorrect options, which are also…
Descriptors: Multiple Choice Tests, Difficulty Level, Accuracy, Error Patterns
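A common concrete form of the distractor analysis this review surveys is a score-group-by-option table: a well-functioning incorrect option attracts low scorers and loses appeal as total score rises, while the keyed option shows the opposite trend. A small sketch with invented data; the helper name is illustrative.

import numpy as np

def option_by_score_group(choices, total_scores, n_groups=3):
    # Proportion choosing each option within score groups (rows: low -> high).
    options = sorted(set(choices))
    cuts = np.quantile(total_scores, np.linspace(0, 1, n_groups + 1))
    group = np.clip(np.searchsorted(cuts, total_scores, side="right") - 1, 0, n_groups - 1)
    table = np.array([[np.mean(choices[group == g] == opt) for opt in options]
                      for g in range(n_groups)])
    return options, table

# Invented item: "B" is the keyed answer; "A" mainly attracts low scorers.
rng = np.random.default_rng(3)
scores = rng.normal(size=1000)
picks_key = rng.random(1000) < 1 / (1 + np.exp(-1.5 * scores))
choices = np.where(picks_key, "B", np.where(rng.random(1000) < 0.6, "A", "C"))
opts, table = option_by_score_group(choices, scores)
print(opts)
print(table.round(2))  # "B" rises across rows; "A" and "C" fall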
Nelson, Gena; Powell, Sarah R. – Grantee Submission, 2017
Though proficiency with computation is highly emphasized in national mathematics standards, students with mathematics difficulty (MD) continue to struggle with computation. To learn more about the differences in computation error patterns between typically achieving students and students with MD, we assessed 478 third-grade students on a measure of…
Descriptors: Computation, Mathematics Instruction, Learning Problems, Mathematics Skills
Peer reviewed
PDF on ERIC Download full text
Zhang, Hanmu – Journal of Education and Learning, 2019
Since understanding reading assignments is important to success in school, improving how text is arranged in textbooks could be an efficient way to help students better understand material and perform well on tests. In this study, we asked students to read two original and two rearranged historical passages, in which rephrased…
Descriptors: Test Items, Textbook Preparation, Retention (Psychology), Recall (Psychology)
Peer reviewed
Direct link
Jackson, Margaret C.; Linden, David E. J.; Roberts, Mark V.; Kriegeskorte, Nikolaus; Haenschel, Corinna – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015
A number of studies have shown that visual working memory (WM) is poorer for complex versus simple items, traditionally accounted for by higher information load placing greater demands on encoding and storage capacity limits. Other research suggests that it may not be complexity that determines WM performance per se, but rather increased…
Descriptors: Visual Perception, Short Term Memory, Test Items, Cognitive Processes
Peer reviewed
PDF on ERIC Download full text
Valdez, Alfred – International Journal of Higher Education, 2013
Metacognitive monitoring processes have been shown to be critical determinants of human learning. Metacognitive monitoring consists of various knowledge estimates that enable learners to engage in self-regulatory processes important both for acquiring knowledge and for monitoring one's knowledge during assessment. This study…
Descriptors: Metacognition, Accuracy, Correlation, Validity
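The "knowledge estimates" referred to here are typically item-level confidence judgments compared against actual performance. Two standard monitoring-accuracy indices are calibration bias (signed over- or underconfidence) and absolute accuracy (mean absolute gap between confidence and correctness); a minimal sketch with invented numbers follows.

import numpy as np

# Invented confidence judgments (0-1) and actual correctness for eight items.
confidence = np.array([0.9, 0.8, 0.6, 0.95, 0.5, 0.7, 0.4, 0.85])
correct    = np.array([1,   1,   0,   1,    0,   1,   1,   0  ])

bias = (confidence - correct).mean()                      # > 0: overconfident, < 0: underconfident
absolute_accuracy = np.abs(confidence - correct).mean()   # 0 = perfectly calibrated
print(f"calibration bias: {bias:+.2f}, absolute accuracy: {absolute_accuracy:.2f}")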
Peer reviewed
Direct link
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
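One kind of evidence such reviews weigh, whether judged by a human or by an automated rule, is the agreement between observed and model-expected proportions correct across ability strata. A minimal sketch assuming an already-fitted 2PL item; all names and data are invented.

import numpy as np

def binned_item_fit(responses, thetas, a, b, n_bins=10):
    # Observed vs. 2PL-expected proportion correct within ability bins --
    # the kind of table an analyst eyeballs during field-test item review.
    expected = 1 / (1 + np.exp(-a * (thetas - b)))
    edges = np.quantile(thetas, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, thetas, side="right") - 1, 0, n_bins - 1)
    return np.array([(responses[idx == k].mean(), expected[idx == k].mean())
                     for k in range(n_bins)])

rng = np.random.default_rng(11)
thetas = rng.normal(size=5000)
p = 1 / (1 + np.exp(-1.0 * (thetas - 0.2)))        # data generated from the same 2PL
responses = (rng.random(5000) < p).astype(float)
print(binned_item_fit(responses, thetas, a=1.0, b=0.2).round(3))
# The two columns track each other closely when the item fits the model.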