Showing 1 to 15 of 34 results
Peer reviewed
Lee, Sooyong; Han, Suhwa; Choi, Seung W. – Educational and Psychological Measurement, 2022
Response data containing an excessive number of zeros are referred to as zero-inflated data. When differential item functioning (DIF) detection is of interest, zero-inflation can attenuate DIF effects in the total sample and lead to underdetection of DIF items. The current study presents a DIF detection procedure for response data with excess…
Descriptors: Test Bias, Monte Carlo Methods, Simulation, Models
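The zero-inflation mechanism this abstract refers to is conventionally written as a mixture (a generic sketch, not the specific model used in the study): with some probability a respondent produces a structural zero regardless of ability, and otherwise responds according to an ordinary IRT model,

\[ P(Y_{pi}=0) = \pi_p + (1-\pi_p)\,P_{\mathrm{IRT}}(Y_{pi}=0 \mid \theta_p), \qquad P(Y_{pi}=y) = (1-\pi_p)\,P_{\mathrm{IRT}}(Y_{pi}=y \mid \theta_p) \ \ (y>0). \]

Because structural zeros arise independently of \theta_p, they dilute between-group differences in the observed responses, which is one route by which zero-inflation can attenuate DIF effects as described above.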
Peer reviewed
Eglington, Luke G.; Pavlik, Philip I., Jr. – International Journal of Artificial Intelligence in Education, 2023
An important component of many Adaptive Instructional Systems (AIS) is a 'Learner Model' intended to track student learning and predict future performance. Predictions from learner models are frequently used in combination with mastery criterion decision rules to make pedagogical decisions. Important aspects of learner models, such as learning…
Descriptors: Computer Assisted Instruction, Intelligent Tutoring Systems, Learning Processes, Individual Differences
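The mastery criterion decision rules mentioned in this abstract typically amount to a threshold on the learner model's predicted probability of a correct response. The sketch below is a generic illustration in Python; the function names, callable interfaces, and the 0.95 threshold are assumptions for illustration, not details taken from the article.

```python
def practice_until_mastery(predict_p_correct, observe_response, update_state,
                           state, threshold=0.95, max_trials=50):
    """Present practice attempts until the learner model predicts mastery.

    predict_p_correct(state) -> model's predicted probability of a correct response
    observe_response(state)  -> 1/0 outcome of the next practice attempt
    update_state(state, y)   -> new model state after observing outcome y
    """
    for trial in range(max_trials):
        if predict_p_correct(state) >= threshold:   # mastery criterion met
            return state, trial
        y = observe_response(state)                 # student attempts an item
        state = update_state(state, y)              # learner model is updated
    return state, max_trials                        # practice budget exhausted
```

Under such a rule, bias in the learner model's predictions translates directly into too much or too little practice, which is why the modelling choices discussed in the article matter pedagogically.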
Peer reviewed
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Grantee Submission, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
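The linearity assumption this abstract refers to is visible in the standard explanatory item response model, sketched generically below (not the authors' exact parameterization): person covariates, item covariates, and their interactions enter the logit additively and linearly,

\[ \operatorname{logit} P(Y_{pi}=1) = \theta_p + \sum_j \gamma_j Z_{pj} + \sum_k \delta_k X_{ik} + \sum_{j,k} \omega_{jk} Z_{pj} X_{ik}, \qquad \theta_p \sim N(0,\sigma^2), \]

where Z_{pj} are person covariates, X_{ik} are item covariates, and \omega_{jk} are person-by-item covariate interaction effects. Relaxing the assumption means letting the covariates enter through nonlinear (e.g., smooth) functions rather than linear terms.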
Peer reviewed
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Journal of Educational Measurement, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Peer reviewed
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Jorge Salas – Journal of Educational Measurement, 2024
Despite the growing interest in incorporating response time data into item response models, there has been a lack of research investigating how the effect of speed on the probability of a correct response varies across different groups (e.g., experimental conditions) for various items (i.e., differential response time item analysis). Furthermore,…
Descriptors: Item Response Theory, Reaction Time, Models, Accuracy
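A generic way to express the effect under study (an illustrative sketch, not the authors' model) is to let response speed enter the accuracy model with a coefficient that can differ by item and by group,

\[ \operatorname{logit} P(Y_{pig}=1) = \theta_p - b_i + \lambda_{ig}\,\tau_{pi}, \]

where \tau_{pi} is a (log) response time or latent speed measure; differential response time item analysis then asks whether \lambda_{ig} varies across groups g for a given item i.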
Eglington, Luke G.; Pavlik, Philip I., Jr. – Grantee Submission, 2022
An important component of many Adaptive Instructional Systems (AIS) is a 'Learner Model' intended to track student learning and predict future performance. Predictions from learner models are frequently used in combination with mastery criterion decision rules to make pedagogical decisions. Important aspects of learner models, such as learning…
Descriptors: Computer Assisted Instruction, Intelligent Tutoring Systems, Learning Processes, Individual Differences
Peer reviewed
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Jorge Salas – Grantee Submission, 2024
Despite the growing interest in incorporating response time data into item response models, there has been a lack of research investigating how the effect of speed on the probability of a correct response varies across different groups (e.g., experimental conditions) for various items (i.e., differential response time item analysis). Furthermore,…
Descriptors: Item Response Theory, Reaction Time, Models, Accuracy
Peer reviewed
Robitzsch, Alexander; Lüdtke, Oliver – Large-scale Assessments in Education, 2023
One major aim of international large-scale assessments (ILSA) like PISA is to monitor changes in student performance over time. To accomplish this task, a set of common items (i.e., link items) is repeatedly administered in each assessment. Linking methods based on item response theory (IRT) models are used to align the results from the different…
Descriptors: Educational Trends, Trend Analysis, International Assessment, Achievement Tests
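For the common-item design described here, the familiar IRT linking methods place a new cycle's item parameters on the base scale through a linear transformation of the ability metric; a standard mean-mean sketch (general background, not the specific linking methods compared in the article) is

\[ \theta_Y = A\,\theta_X + B, \qquad a_{Yi} = a_{Xi}/A, \qquad b_{Yi} = A\,b_{Xi} + B, \]

with A = \bar{a}_X / \bar{a}_Y and B = \bar{b}_Y - A\,\bar{b}_X computed from the link items administered in both assessments. How the link items behave across cycles therefore drives the estimated performance trend.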
Peer reviewed
PDF on ERIC
Braithwaite, David W.; Pyke, Aryn A.; Siegler, Robert S. – Grantee Submission, 2017
Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it…
Descriptors: Arithmetic, Computation, Models, Mathematics Instruction
Peer reviewed
Harring, Jeffrey R.; Weiss, Brandi A.; Li, Ming – Educational and Psychological Measurement, 2015
Several studies have stressed the importance of simultaneously estimating interaction and quadratic effects in multiple regression analyses, even if theory only suggests an interaction effect should be present. Specifically, past studies suggested that failing to simultaneously include quadratic effects when testing for interaction effects could…
Descriptors: Structural Equation Models, Statistical Analysis, Monte Carlo Methods, Computation
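The modelling issue raised here is the contrast between fitting the interaction term alone and fitting it together with the quadratic terms; in plain regression notation (not the authors' latent-variable specification) the two competing models are

\[ y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2 + \varepsilon \quad\text{vs.}\quad y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2 + \beta_4 x_1^2 + \beta_5 x_2^2 + \varepsilon. \]

When x_1 and x_2 are correlated, the product x_1 x_2 is also correlated with x_1^2 and x_2^2, so omitting the quadratic terms lets \beta_3 absorb unmodelled curvature and can produce a spurious interaction effect.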
Peer reviewed
Warker, Jill A.; Dell, Gary S. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015
Novel phonotactic constraints can be acquired by hearing or speaking syllables that follow a novel constraint. When learned from hearing syllables, these newly learned constraints generalize to syllables that were not experienced during training. However, generalization of phonotactic learning to novel syllables has never been persuasively…
Descriptors: Experimental Psychology, Syllables, Generalization, Speech Communication
Peer reviewed
Schoeneberger, Jason A. – Journal of Experimental Education, 2016
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Descriptors: Sample Size, Models, Computation, Predictor Variables
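In its simplest random-intercept form, a binary-outcome multilevel model of the kind examined in this Monte Carlo study can be written as (a generic sketch, not the exact simulated conditions)

\[ \operatorname{logit} P(y_{ij}=1) = \gamma_{00} + \gamma_{10} x_{ij} + \gamma_{01} w_j + u_{0j}, \qquad u_{0j} \sim N(0,\tau_{00}), \]

where i indexes level-1 units nested in level-2 units j. The design factors listed in the abstract (estimation method, the size of the variance component \tau_{00}, the number of predictors, and the sample sizes at each level) all act through this specification.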
Klingler, Severin; Käser, Tanja; Solenthaler, Barbara; Gross, Markus – International Educational Data Mining Society, 2015
Modeling student knowledge is a fundamental task of an intelligent tutoring system. A popular approach for modeling the acquisition of knowledge is Bayesian Knowledge Tracing (BKT). Various extensions to the original BKT model have been proposed, among them two novel models that unify BKT and Item Response Theory (IRT). Latent Factor Knowledge…
Descriptors: Intelligent Tutoring Systems, Knowledge Level, Item Response Theory, Prediction
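Bayesian Knowledge Tracing, which the extensions discussed here build on, maintains a single mastery probability per skill and updates it after every observed response. The Python sketch below uses the conventional parameter names (slip, guess, transit); it is a minimal illustration of standard BKT, not code from this paper.

```python
def bkt_update(p_mastery, correct, p_slip, p_guess, p_transit):
    """One BKT step: Bayesian posterior given the response, then a learning transition."""
    if correct:
        evidence = p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess
        posterior = p_mastery * (1 - p_slip) / evidence
    else:
        evidence = p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess)
        posterior = p_mastery * p_slip / evidence
    return posterior + (1 - posterior) * p_transit   # chance of learning after the attempt

# Example: trace mastery over a short response sequence (0 = wrong, 1 = correct).
p = 0.2                                              # initial mastery probability P(L0)
for y in [0, 1, 1, 1]:
    p = bkt_update(p, y, p_slip=0.1, p_guess=0.2, p_transit=0.15)
```

Unifications of BKT and IRT such as those mentioned above typically individualize the guess, slip, or learning probabilities with student- and item-level parameters rather than keeping them constant.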
Peer reviewed
Liang, Tie; Wells, Craig S.; Hambleton, Ronald K. – Journal of Educational Measurement, 2014
As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…
Descriptors: Item Response Theory, Measurement Techniques, Nonparametric Statistics, Models
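RISE quantifies item misfit as the discrepancy between a nonparametrically estimated item characteristic curve and the curve implied by the fitted parametric model; in generic form (details of Douglas and Cohen's estimator omitted) it is

\[ \mathrm{RISE}_i = \sqrt{ \int \bigl( \hat{P}^{\mathrm{np}}_i(\theta) - \hat{P}^{\mathrm{par}}_i(\theta) \bigr)^2 f(\theta)\, d\theta }, \]

where \hat{P}^{\mathrm{np}}_i is a kernel-smoothed estimate of item i's response function, \hat{P}^{\mathrm{par}}_i is the parametric curve, and f(\theta) weights the discrepancy by the ability distribution.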
Peer reviewed
Hou, Likun; de la Torre, Jimmy; Nandakumar, Ratna – Journal of Educational Measurement, 2014
Analyzing examinees' responses using cognitive diagnostic models (CDMs) has the advantage of providing diagnostic information. To ensure the validity of the results from these models, differential item functioning (DIF) in CDMs needs to be investigated. In this article, the Wald test is proposed to examine DIF in the context of CDMs. This study…
Descriptors: Test Bias, Models, Simulation, Error Patterns
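In generic form (the specific CDM parameterization is not reproduced from the article), the Wald test for DIF compares an item's estimated parameters between the reference (R) and focal (F) groups,

\[ W = (\hat{\boldsymbol{\beta}}_R - \hat{\boldsymbol{\beta}}_F)^{\top} \bigl[ \widehat{\operatorname{Var}}(\hat{\boldsymbol{\beta}}_R) + \widehat{\operatorname{Var}}(\hat{\boldsymbol{\beta}}_F) \bigr]^{-1} (\hat{\boldsymbol{\beta}}_R - \hat{\boldsymbol{\beta}}_F), \]

referred to a \chi^2 distribution with degrees of freedom equal to the number of item parameters compared; a significant W flags the item as functioning differentially.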