Showing 1 to 15 of 24 results
Peer reviewed
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Grantee Submission, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
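For orientation, the linear functional form that EIRMs conventionally assume (the assumption this paper examines) can be sketched as follows; the notation is a generic sketch, not taken from the article itself:

    \operatorname{logit}\Pr(Y_{pi}=1)
      = \sum_{j}\gamma_j Z_{pj} + \varepsilon_p - \sum_{k}\beta_k X_{ik},
    \qquad \varepsilon_p \sim N(0,\sigma^2)

Here Z_{pj} are person covariates, X_{ik} are item covariates, and person-by-item interaction terms enter the same linear predictor.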
Peer reviewed
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Journal of Educational Measurement, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Davison, Mark L.; Davenport, Ernest C., Jr.; Jia, Hao; Seipel, Ben; Carlson, Sarah E. – Grantee Submission, 2022
A regression model of predictor trade-offs is described. Each regression parameter equals the expected change in Y obtained by trading 1 point from one predictor to a second predictor. The model applies to predictor variables that sum to a constant T for all observations; for example, proportions summing to T=1.0 or percentages summing to T=100…
Descriptors: Regression (Statistics), Prediction, Predictor Variables, Models
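To make the trade-off reading concrete, here is a minimal self-contained sketch (our construction, not the authors' data or code): with compositional predictors summing to T = 1, one component is dropped before fitting, and each remaining coefficient is then the expected change in Y from trading one unit of the dropped predictor to that predictor.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.dirichlet([2.0, 2.0, 2.0], size=500)           # each row sums to T = 1
    y = 3*x[:, 0] + 1*x[:, 1] + 0*x[:, 2] + rng.normal(0, 0.1, 500)

    X = np.column_stack([np.ones(500), x[:, 0], x[:, 1]])  # x3 dropped
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    # b[1] is ~3: the expected gain in y from moving one unit from x3 to x1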
Peer reviewed
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Jorge Salas – Journal of Educational Measurement, 2024
Despite the growing interest in incorporating response time data into item response models, there has been a lack of research investigating how the effect of speed on the probability of a correct response varies across different groups (e.g., experimental conditions) for various items (i.e., differential response time item analysis). Furthermore,…
Descriptors: Item Response Theory, Reaction Time, Models, Accuracy
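A minimal sketch of the kind of differential response-time item analysis described here, using simulated long-format data (all variable names and effect sizes are hypothetical, not the authors'):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 2000
    df = pd.DataFrame({"item": rng.integers(0, 5, n),
                       "group": rng.integers(0, 2, n),
                       "speed": rng.normal(0.0, 1.0, n)})
    # Speed helps overall, but hurts group 1 on item 3 (a differential effect).
    eta = -0.2 + 0.4*df.speed - 0.8*df.speed*df.group*(df.item == 3)
    df["correct"] = rng.binomial(1, 1/(1 + np.exp(-eta)))

    fit = smf.logit("correct ~ C(item) * C(group) * speed", df).fit()
    # Significant item-by-group-by-speed terms flag differential RT effects.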
Peer reviewed
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Jorge Salas – Grantee Submission, 2024
Despite the growing interest in incorporating response time data into item response models, there has been a lack of research investigating how the effect of speed on the probability of a correct response varies across different groups (e.g., experimental conditions) for various items (i.e., differential response time item analysis). Furthermore,…
Descriptors: Item Response Theory, Reaction Time, Models, Accuracy
Peer reviewed
Petscher, Yaacov; Compton, Donald L.; Steacy, Laura; Kinnon, Hannah – Annals of Dyslexia, 2020
Models of word reading that simultaneously take into account item-level and person-level fixed and random effects are broadly known as explanatory item response models (EIRM). Although many variants of the EIRM are available, the field has generally focused on the doubly explanatory model for modeling individual differences on item responses.…
Descriptors: Item Response Theory, Reading Skills, Individual Differences, Models
Peer reviewed
Robitzsch, Alexander; Lüdtke, Oliver – Large-scale Assessments in Education, 2023
One major aim of international large-scale assessments (ILSA) like PISA is to monitor changes in student performance over time. To accomplish this task, a set of common items (i.e., link items) is repeatedly administered in each assessment. Linking methods based on item response theory (IRT) models are used to align the results from the different…
Descriptors: Educational Trends, Trend Analysis, International Assessment, Achievement Tests
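For readers unfamiliar with the linking step, a minimal mean-sigma sketch (illustrative numbers; this is one of several IRT linking methods such studies compare, not necessarily the authors' preferred one):

    import numpy as np

    # Difficulty estimates of the same link items, calibrated separately per wave.
    b_old = np.array([-1.2, -0.3, 0.1, 0.8, 1.5])
    b_new = np.array([-1.0, -0.1, 0.4, 1.0, 1.9])

    A = b_old.std(ddof=1) / b_new.std(ddof=1)   # scale:  theta* = A*theta + B
    B = b_old.mean() - A * b_new.mean()
    b_linked = A * b_new + B                    # new wave on the old wave's scale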
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
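One way to write the within-outcome HTE idea in item response terms (our schematic, not necessarily the authors' parameterization) is to give the treatment effect an item-specific component:

    \operatorname{logit}\Pr(Y_{pi}=1)
      = \theta_p - b_i + (\tau + \omega_i)\,T_p,
    \qquad \omega_i \sim N(0, \sigma_\omega^2)

A nonzero sigma_omega then signals treatment effects that vary across the items of a single outcome measure.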
Suh, Youngsuk; Cho, Sun-Joo; Bottge, Brian A. – Grantee Submission, 2018
This article presents a multilevel longitudinal nested logit model for analyzing correct response and error types in multilevel longitudinal intervention data collected under a pretest-posttest, cluster randomized trial design. The use of the model is illustrated with a real data analysis, including a model comparison study regarding model…
Descriptors: Hierarchical Linear Modeling, Longitudinal Studies, Error Patterns, Change
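Schematically, a nested logit of this kind first models accuracy and then, conditional on an incorrect response, the choice among error types (our notation; the multilevel longitudinal terms live inside the linear predictors eta):

    \Pr(\text{correct}_{pi}) = \frac{1}{1 + e^{-\eta_{pi}}},
    \qquad
    \Pr(\text{error } k \mid \text{incorrect}_{pi})
      = \frac{e^{\eta^{*}_{pik}}}{\sum_{m} e^{\eta^{*}_{pim}}}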
Peer reviewed
Chen, Binglin; West, Matthew; Zilles, Craig – International Educational Data Mining Society, 2018
This paper attempts to quantify the accuracy limit of "next-item-correct" prediction by using numerical optimization to estimate the student's probability of getting each question correct given a complete sequence of item responses. This optimization is performed without an explicit parameterized model of student behavior, but with the…
Descriptors: Accuracy, Probability, Student Behavior, Test Items
Choi, Hye-Jeong; Cohen, Allan S.; Bottge, Brian A. – Grantee Submission, 2016
The purpose of this study was to apply a random item mixture nominal item response model (RIM-MixNRM) for investigating instruction effects. The host study design was a pretest-posttest, school-based cluster randomized trial. A RIM-MixNRM was used to identify students' error patterns in mathematics at the pretest and the posttest…
Descriptors: Item Response Theory, Instructional Effectiveness, Test Items, Models
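As a schematic (not the authors' exact parameterization), a mixture nominal response model gives each latent class g its own category curves, with the random-item (RIM) part treating the item parameters as random:

    \Pr(Y_{pi}=k \mid \theta_p, g)
      = \frac{\exp(a_{ikg}\,\theta_p + c_{ikg})}
             {\sum_{m}\exp(a_{img}\,\theta_p + c_{img})}

Class membership then summarizes a student's error-pattern profile at pretest and posttest.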
Peer reviewed
Kogar, Esin Yilmaz; Kelecioglu, Hülya – Journal of Education and Learning, 2017
The purpose of this research is first to estimate the item and ability parameters, and the standard errors of those parameters, obtained from Unidimensional Item Response Theory (UIRT), bifactor (BIF), and Testlet Response Theory (TRT) models in tests that include testlets, when the number of testlets, number of independent items, and…
Descriptors: Item Response Theory, Models, Mathematics Tests, Test Items
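For reference, the two multidimensional competitors can be sketched in generic 2PL-style notation (ours, not the article's); the testlet model is the bifactor model with the specific-dimension slope constrained proportional to the general slope:

    \text{BIF:}\;\; \operatorname{logit}\Pr(Y_{pi}=1)
        = a_i\,\theta_p + a_i^{(s)}\,\theta_{p,s(i)} - b_i
    \qquad
    \text{TRT:}\;\; \operatorname{logit}\Pr(Y_{pi}=1)
        = a_i\,(\theta_p + \gamma_{p,t(i)}) - b_i

where s(i) and t(i) map item i to its testlet.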
Klingler, Severin; Käser, Tanja; Solenthaler, Barbara; Gross, Markus – International Educational Data Mining Society, 2015
Modeling student knowledge is a fundamental task of an intelligent tutoring system. A popular approach for modeling the acquisition of knowledge is Bayesian Knowledge Tracing (BKT). Various extensions to the original BKT model have been proposed, among them two novel models that unify BKT and Item Response Theory (IRT). Latent Factor Knowledge…
Descriptors: Intelligent Tutoring Systems, Knowledge Level, Item Response Theory, Prediction
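Since BKT is the base model every extension here builds on, a minimal self-contained update loop may help (parameter values are illustrative defaults, not taken from the paper):

    def bkt_trace(obs, p_init=0.3, p_learn=0.2, p_slip=0.1, p_guess=0.2):
        """Return P(correct) predicted before each observed response (0/1)."""
        p_know, preds = p_init, []
        for correct in obs:
            p_correct = p_know * (1 - p_slip) + (1 - p_know) * p_guess
            preds.append(p_correct)
            # Bayesian update of mastery given the observed response...
            if correct:
                p_know = p_know * (1 - p_slip) / p_correct
            else:
                p_know = p_know * p_slip / (1 - p_correct)
            # ...then the learning transition.
            p_know = p_know + (1 - p_know) * p_learn
        return preds

    print(bkt_trace([1, 0, 1, 1]))   # predictions rise as evidence accumulates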
Peer reviewed
Liang, Tie; Wells, Craig S.; Hambleton, Ronald K. – Journal of Educational Measurement, 2014
As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…
Descriptors: Item Response Theory, Measurement Techniques, Nonparametric Statistics, Models
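The RISE statistic is commonly written as the weighted integrated squared gap between a kernel-smoothed (nonparametric) item characteristic curve and the parametric one; this sketch follows the Douglas-Cohen definition, though weighting details vary by implementation:

    \mathrm{RISE}_i
      = \sqrt{\,\sum_{q}\big(\hat{P}_i(\theta_q) - P_i(\theta_q)\big)^2\, w(\theta_q)\,}

where theta_q are quadrature points and w(theta_q) are ability-density weights.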
Peer reviewed
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making