Mark L. Davison; David J. Weiss; Joseph N. DeWeese; Ozge Ersan; Gina Biancarosa; Patrick C. Kennedy – Journal of Educational and Behavioral Statistics, 2023
A tree model for diagnostic educational testing is described along with Monte Carlo simulations designed to evaluate measurement accuracy based on the model. The model is implemented in an assessment of inferential reading comprehension, the Multiple-Choice Online Causal Comprehension Assessment (MOCCA), through a sequential, multidimensional,…
Descriptors: Cognitive Processes, Diagnostic Tests, Measurement, Accuracy
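A minimal Monte Carlo sketch of the general idea, not the authors' MOCCA model: simulate examinees from two hypothetical latent comprehension classes, generate item responses, classify with a cut score, and estimate classification accuracy. All parameter values are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
N_EXAMINEES, N_ITEMS = 5000, 40
# Hypothetical latent classes and their probabilities of producing a
# causally coherent response on any given item.
P_COHERENT = {"causal": 0.80, "lateral": 0.40}
CUT = 25  # hypothetical cut score for classifying an examinee as "causal"

true_class = rng.choice(list(P_COHERENT), size=N_EXAMINEES)
scores = np.array([rng.binomial(N_ITEMS, P_COHERENT[c]) for c in true_class])
predicted = np.where(scores >= CUT, "causal", "lateral")
print(f"classification accuracy: {(predicted == true_class).mean():.3f}")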
Peer reviewed
Spencer Salas; Maryann Mraz; Susan Green; Brian Keith Williams – English Teaching Forum, 2024
This article uses the Stephen Crane story "The Open Boat" (freely available on the American English website) as an anchor text to demonstrate how teachers can apply Raphael's Question-Answer Relationship (QAR) technique to a text that students might be assigned to read. The article includes numerous examples and tips that teachers can…
Descriptors: Questioning Techniques, Responses, Reading Comprehension, Teaching Methods
Peer reviewed
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Jorge Salas – Journal of Educational Measurement, 2024
Despite the growing interest in incorporating response time data into item response models, there has been a lack of research investigating how the effect of speed on the probability of a correct response varies across different groups (e.g., experimental conditions) for various items (i.e., differential response time item analysis). Furthermore,…
Descriptors: Item Response Theory, Reaction Time, Models, Accuracy
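As context for the modeling idea, a minimal sketch (not the authors' parameterization) of an item response function in which log response time shifts the probability of a correct response, with a slope that may differ by item and group; whether that slope differs is what a differential response time item analysis examines. All values are hypothetical.

import numpy as np

def p_correct(theta, b, log_rt, gamma):
    """Hypothetical 1PL-style response function with a speed covariate:
    logit P(correct) = theta - b + gamma * log_rt. A differential
    response time analysis asks whether gamma varies across items
    and groups (e.g., experimental conditions)."""
    return 1.0 / (1.0 + np.exp(-(theta - b + gamma * log_rt)))

# Hypothetical speed slopes for one item under two conditions.
gamma_by_group = {"control": -0.30, "treatment": 0.10}
for group, gamma in gamma_by_group.items():
    print(group, round(p_correct(theta=0.5, b=0.0, log_rt=1.2, gamma=gamma), 3))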
Peer reviewed
Jana Welling; Timo Gnambs; Claus H. Carstensen – Educational and Psychological Measurement, 2024
Disengaged responding poses a severe threat to the validity of educational large-scale assessments, because item responses from unmotivated test-takers do not reflect their actual ability. Existing identification approaches rely primarily on item response times, which bears the risk of misclassifying fast engaged or slow disengaged responses.…
Descriptors: Foreign Countries, College Students, Guessing (Tests), Multiple Choice Tests
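A common response-time-only baseline, shown for contrast with the hybrid approach the paper motivates: flag responses faster than an item-level threshold as likely rapid guesses. The threshold rule here is a hypothetical heuristic.

import numpy as np

def flag_rapid_guesses(rt, frac=0.10):
    """Flag responses faster than frac * (item median response time)
    as candidate disengaged (rapid-guessing) responses. rt is an
    examinee-by-item matrix of response times in seconds."""
    thresholds = frac * np.median(rt, axis=0)
    return rt < thresholds

rt = np.array([[24.0, 31.5, 18.2],
               [ 1.1, 29.8, 17.9],   # fast first response: flagged
               [22.7,  2.4, 16.4]])  # fast second response: flagged
print(flag_rapid_guesses(rt))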
Peer reviewed
Christhilf, Katerina; Newton, Natalie; Butterfuss, Reese; McCarthy, Kathryn S.; Allen, Laura K.; Magliano, Joseph P.; McNamara, Danielle S. – International Educational Data Mining Society, 2022
Prompting students to generate constructed responses as they read provides a window into the processes and strategies that they use to make sense of complex text. In this study, Markov models were used to examine the extent to which (1) patterns of strategies and (2) strategy combinations could be used to inform computational models of students' text…
Descriptors: Markov Processes, Reading Strategies, Reading Comprehension, Models
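A minimal sketch of the core computation, with hypothetical strategy labels: estimate a first-order Markov transition matrix from coded sequences of reading strategies.

from collections import Counter

def transition_matrix(sequences, states):
    """Maximum-likelihood first-order Markov transition probabilities
    estimated from coded strategy sequences."""
    counts = {s: Counter() for s in states}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for s in states:
        total = sum(counts[s].values())
        probs[s] = {t: (counts[s][t] / total if total else 0.0) for t in states}
    return probs

# Hypothetical coded constructed-response strategy sequences.
sequences = [["paraphrase", "bridge", "elaborate", "bridge"],
             ["paraphrase", "paraphrase", "bridge", "elaborate"]]
states = ["paraphrase", "bridge", "elaborate"]
for s, row in transition_matrix(sequences, states).items():
    print(s, row)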
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Jorge Salas – Grantee Submission, 2024
Despite the growing interest in incorporating response time data into item response models, there has been a lack of research investigating how the effect of speed on the probability of a correct response varies across different groups (e.g., experimental conditions) for various items (i.e., differential response time item analysis). Furthermore,…
Descriptors: Item Response Theory, Reaction Time, Models, Accuracy
Joshua B. Gilbert – Annenberg Institute for School Reform at Brown University, 2022
This simulation study examines the characteristics of the Explanatory Item Response Model (EIRM) for estimating treatment effects, compared with classical test theory (CTT) sum and mean scores and item response theory (IRT)-based theta scores. Results show that the EIRM and IRT theta scores provide generally equivalent bias and false positive…
Descriptors: Item Response Theory, Models, Test Theory, Computation
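A simplified sketch of the kind of comparison such a simulation makes, not the EIRM itself (which models item responses directly): generate Rasch item responses with a known treatment effect on the latent trait, then compare the standardized effect recovered from sum scores against the effect on the latent trait. All constants are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
N, J, EFFECT = 2000, 25, 0.30            # examinees, items, true effect (SD units)
treat = rng.integers(0, 2, N)
theta = rng.normal(0.0, 1.0, N) + EFFECT * treat
b = rng.normal(0.0, 1.0, J)              # Rasch item difficulties
x = rng.binomial(1, 1.0 / (1.0 + np.exp(-(theta[:, None] - b))))
sum_scores = x.sum(axis=1).astype(float)

def cohens_d(y, g):
    """Standardized mean difference with pooled SD."""
    y1, y0 = y[g == 1], y[g == 0]
    sp = np.sqrt((y1.var(ddof=1) + y0.var(ddof=1)) / 2.0)
    return (y1.mean() - y0.mean()) / sp

print("effect from sum scores:", round(cohens_d(sum_scores, treat), 3))
print("effect on latent trait:", round(cohens_d(theta, treat), 3))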
Peer reviewed
Wagner, Richard K.; Moxley, Jerad; Schatschneider, Chris; Zirps, Fotena A. – Scientific Studies of Reading, 2023
Purpose: Bayesian-based models for diagnosis are common in medicine but have not been incorporated into identification models for dyslexia. The purpose of the present study was to evaluate Bayesian identification models that included a broader set of predictors and that capitalized on recent developments in modeling the prevalence of dyslexia.…
Descriptors: Bayesian Statistics, Identification, Dyslexia, Models
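The core of a Bayesian identification model is the prevalence-weighted posterior; a minimal sketch with hypothetical screening parameters shows why modeling prevalence matters:

def posterior_prob(prevalence, sensitivity, specificity):
    """P(condition | positive indicator) by Bayes' rule: prevalence acts
    as the prior, so the same screen yields very different posterior
    probabilities at different base rates."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical values: 7% vs. 15% assumed prevalence of dyslexia,
# screened with an 85%-sensitive, 90%-specific indicator.
for prev in (0.07, 0.15):
    print(prev, round(posterior_prob(prev, 0.85, 0.90), 3))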
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
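A minimal sketch of HTE within an outcome measure, with hypothetical items: the treatment moves performance on some items but not others, which per-item effect estimates reveal and a single total-score effect hides.

import numpy as np

rng = np.random.default_rng(2)
N, J = 4000, 10
treat = rng.integers(0, 2, N)
theta = rng.normal(0.0, 1.0, N)
# Hypothetical: the intervention helps only the first three items
# (e.g., items aligned with the taught content).
item_boost = np.array([0.8, 0.8, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
logit = theta[:, None] + item_boost * treat[:, None]
x = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Per-item treatment effects (difference in proportion correct);
# heterogeneity across items is HTE within the outcome measure.
per_item = x[treat == 1].mean(axis=0) - x[treat == 0].mean(axis=0)
print(np.round(per_item, 2))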
Peer reviewed
Tatarinova, Galiya; Neamah, Nour Raheem; Mohammed, Aisha; Hassan, Aalaa Yaseen; Obaid, Ali Abdulridha; Ismail, Ismail Abdulwahhab; Maabreh, Hatem Ghaleb; Afif, Al Khateeb Nashaat Sultan; Viktorovna, Shvedova Irina – International Journal of Language Testing, 2023
Unidimensionality is an important assumption of measurement, but it is frequently violated. Most of the time, tests are deliberately constructed to be multidimensional to cover all aspects of the intended construct. In such situations, the application of unidimensional item response theory (IRT) models is not justified due to poor model fit and…
Descriptors: Item Response Theory, Test Items, Language Tests, Correlation
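A quick empirical check of the unidimensionality assumption, shown as one simple heuristic rather than the paper's method: the first-to-second eigenvalue ratio of the inter-item correlation matrix, computed here on hypothetical two-dimensional data.

import numpy as np

def eigenvalue_ratio(responses):
    """First-to-second eigenvalue ratio of the inter-item correlation
    matrix; a large ratio suggests one dominant dimension, a small
    ratio suggests multidimensionality."""
    eig = np.sort(np.linalg.eigvalsh(np.corrcoef(responses, rowvar=False)))[::-1]
    return eig[0] / eig[1]

rng = np.random.default_rng(3)
t1, t2 = rng.normal(size=(2, 1000))        # two distinct latent traits
items = np.hstack([t1[:, None] + rng.normal(0, 1, (1000, 5)),
                   t2[:, None] + rng.normal(0, 1, (1000, 5))])
print(round(eigenvalue_ratio(items), 2))   # near 1: clearly not unidimensional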
Nicula, Bogdan; Perret, Cecile A.; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2020
Open-ended comprehension questions are a common type of assessment used to evaluate how well students understand one of multiple documents. Our aim is to use natural language processing (NLP) to infer the level and type of inferencing within readers' answers to comprehension questions using linguistic and semantic features within their responses.…
Descriptors: Natural Language Processing, Taxonomy, Responses, Semantics
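A minimal sketch of the general pipeline, with toy features and labels rather than the authors' linguistic and semantic feature set: vectorize constructed responses and train a classifier over inference types.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy answers labeled with an inference type.
answers = ["the storm grew because the wind kept rising",
           "the men rowed together, connecting the earlier warning to the waves",
           "this reminds me of a trip my family took",
           "the oiler was tired because he had rowed all night"]
labels = ["bridging", "bridging", "elaborative", "bridging"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(answers, labels)
print(clf.predict(["the captain worried because the boat was small"]))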
Peer reviewed
Wu, Chao-Jung; Liu, Chia-Yu; Yang, Chung-Hsuan; Jian, Yu-Cin – European Journal of Psychology of Education, 2021
Despite decades of research on the close link between eye movements and human cognitive processes, the exact nature of the link between eye movements and deliberative thinking in problem-solving remains unknown. Thus, this study explored the critical eye-movement indicators of deliberative thinking and investigated whether visual behaviors could…
Descriptors: Eye Movements, Reading Comprehension, Screening Tests, Scores
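For context, a sketch of the kind of fixation-based indicators such studies derive; the cutoff and indicator names here are assumptions, not the paper's indicator set.

import numpy as np

# Hypothetical fixation durations (ms) for one examinee on one problem.
fixations = np.array([180, 210, 560, 640, 190, 720, 230])

indicators = {
    "mean_fixation_ms": fixations.mean(),
    # Long fixations (> 500 ms, an assumed cutoff) are often read as
    # markers of effortful, deliberative processing.
    "long_fixation_rate": (fixations > 500).mean(),
    "total_dwell_ms": fixations.sum(),
}
print(indicators)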
Nicula, Bogdan; Dascalu, Mihai; Newton, Natalie N.; Orcutt, Ellen; McNamara, Danielle S. – Grantee Submission, 2021
Learning to paraphrase supports both writing ability and reading comprehension, particularly for less skilled learners. As such, educational tools that integrate automated evaluations of paraphrases can be used to provide timely feedback to enhance learner paraphrasing skills more efficiently and effectively. Paraphrase identification is a popular…
Descriptors: Computational Linguistics, Feedback (Response), Classification, Learning Processes
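A minimal paraphrase-identification baseline for context, not the authors' system: score the overlap of a student paraphrase against the source text with TF-IDF cosine similarity and apply a hypothetical threshold.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def is_paraphrase(source, candidate, threshold=0.5):
    """Classify a candidate as a paraphrase when its TF-IDF cosine
    similarity to the source exceeds an (assumed) threshold."""
    m = TfidfVectorizer().fit_transform([source, candidate])
    return cosine_similarity(m[0], m[1])[0, 0] >= threshold

print(is_paraphrase("The storm forced the crew to abandon the ship",
                    "The crew had to leave the ship because of the storm"))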
Peer reviewed
Geramipour, Masoud – Language Testing in Asia, 2021
Rasch testlet and bifactor models are two measurement models that could deal with local item dependency (LID) in assessing the dimensionality of reading comprehension testlets. This study aimed to apply the measurement models to real item response data of the Iranian EFL reading comprehension tests and compare the validity of the bifactor models…
Descriptors: Foreign Countries, Second Language Learning, English (Second Language), Reading Tests
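A sketch of the bifactor/testlet idea with hypothetical loadings: every item reflects a general reading trait plus a passage-specific (testlet) factor, which is what induces local item dependence within a testlet.

import numpy as np

rng = np.random.default_rng(4)
N = 1000
general = rng.normal(size=N)                        # general reading ability
testlet = {p: rng.normal(size=N) for p in ("passage1", "passage2")}

def item_probs(passage, a_general=1.0, a_testlet=0.6, b=0.0):
    """P(correct) under a bifactor-style structure: general trait plus
    the item's passage-specific factor (hypothetical loadings)."""
    z = a_general * general + a_testlet * testlet[passage] - b
    return 1.0 / (1.0 + np.exp(-z))

# Five items per passage; items within a passage remain correlated even
# after conditioning on the general trait (local item dependence).
responses = np.column_stack([rng.binomial(1, item_probs(p))
                             for p in ("passage1", "passage2") for _ in range(5)])
print(responses.shape)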
Peer reviewed
Tabatabaee-Yazdi, Mona – SAGE Open, 2020
The Hierarchical Diagnostic Classification Model (HDCM) reflects the sequence in which the essential materials and attributes must be presented to answer the items of a test correctly. In this study, a foreign language reading comprehension test was analyzed employing HDCM and the generalized deterministic inputs, noisy "and" gate (G-DINA) model to…
Descriptors: Diagnostic Tests, Classification, Models, Reading Comprehension
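For orientation, a sketch of the simplest member of this model family, the DINA item response function; the cited study fits HDCM and G-DINA, which generalize it. Profiles and the Q-matrix row below are hypothetical.

import numpy as np

def p_correct_dina(alpha, q, slip=0.1, guess=0.2):
    """DINA item response function: an examinee answers correctly with
    probability 1 - slip if they master every attribute the item's
    Q-matrix row requires, and with probability guess otherwise."""
    eta = np.all(alpha >= q, axis=-1)   # mastery of all required attributes
    return np.where(eta, 1.0 - slip, guess)

alpha = np.array([[1, 1, 0],            # masters attributes 1 and 2
                  [1, 0, 0]])           # masters attribute 1 only
q = np.array([1, 1, 0])                 # item requires attributes 1 and 2
print(p_correct_dina(alpha, q))         # [0.9, 0.2]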