Showing 1 to 15 of 47 results
Peer reviewed
PDF on ERIC (full text)
Eray Selçuk; Ergül Demir – International Journal of Assessment Tools in Education, 2024
This research aims to compare the ability and item parameter estimates of Item Response Theory under maximum likelihood and Bayesian approaches across different Monte Carlo simulation conditions. For this purpose, depending on changes in the prior distribution type, sample size, test length, and logistic model, the ability and item…
Descriptors: Item Response Theory, Item Analysis, Test Items, Simulation
Peer reviewed
Direct link
Joo, Seang-Hwane; Lee, Philseok – Journal of Educational Measurement, 2022
This study proposes a new Bayesian differential item functioning (DIF) detection method using posterior predictive model checking (PPMC). Item fit measures, including infit, outfit, the observed score distribution (OSD), and Q1, were considered as discrepancy statistics for the PPMC DIF methods. The performance of the PPMC DIF method was…
Descriptors: Test Items, Bayesian Statistics, Monte Carlo Methods, Prediction
Peer reviewed
Direct link
Babcock, Ben – Applied Psychological Measurement, 2011
Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted to explore the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
Descriptors: Item Response Theory, Sampling, Computation, Statistical Analysis
Peer reviewed
Direct link
Thompson, Nathan A. – Practical Assessment, Research & Evaluation, 2011
Computerized classification testing (CCT) is an approach to designing tests with intelligent algorithms, similar to adaptive testing, but specifically designed for the purpose of classifying examinees into categories such as "pass" and "fail." Like adaptive testing for point estimation of ability, the key component is the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Classification, Probability
Peer reviewed
Direct link
DeCarlo, Lawrence T. – Applied Psychological Measurement, 2011
Cognitive diagnostic models (CDMs) attempt to uncover latent skills or attributes that examinees must possess in order to answer test items correctly. The DINA (deterministic input, noisy "and") model is a popular CDM that has been widely used. It is shown here that a logistic version of the model can easily be fit with standard software for…
Descriptors: Bayesian Statistics, Computation, Cognitive Tests, Diagnostic Tests
Peer reviewed
Direct link
Froelich, Amy G.; Habing, Brian – Applied Psychological Measurement, 2008
DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…
Descriptors: Test Items, Monte Carlo Methods, Form Classes (Languages), Program Effectiveness
Peer reviewed
Direct link
Meade, Adam W.; Lautenschlager, Gary J.; Johnson, Emily C. – Applied Psychological Measurement, 2007
This article highlights issues associated with the use of the differential functioning of items and tests (DFIT) methodology for assessing measurement invariance (or differential functioning) with Likert-type data. Monte Carlo analyses indicate relatively low sensitivity of the DFIT methodology for identifying differential item functioning (DIF)…
Descriptors: Measures (Individuals), Monte Carlo Methods, Likert Scales, Effect Size
Peer reviewed
Direct link
Nylund, Karen L.; Asparouhov, Tihomir; Muthen, Bengt O. – Structural Equation Modeling: A Multidisciplinary Journal, 2007
Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models' usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study…
Descriptors: Test Items, Monte Carlo Methods, Program Effectiveness, Data Analysis
De Ayala, R. J.; Kim, Seock-Ho; Stapleton, Laura M.; Dayton, C. Mitchell – 1999
Differential item functioning (DIF) occurs when an item displays different statistical properties for different groups after the groups are matched on an ability measure. For instance, with binary data, DIF exists when there is a difference in the conditional probabilities of a correct response for two manifest groups. This paper…
Descriptors: Item Bias, Monte Carlo Methods, Test Items
Peer reviewed
Direct link
Shuqun, Yang; Shuliang, Ding; Zhiqiang, Yao – International Journal of Distance Education Technologies, 2009
Cognitive diagnosis (CD) plays an important role in intelligent tutoring systems. Computerized adaptive testing (CAT) is adaptive, fair, and efficient, making it suitable for large-scale examinations. A traditional cognitive diagnostic test requires quite a large number of items; an efficient, tailored CAT could be a remedy for this, so CAT with…
Descriptors: Monte Carlo Methods, Distance Education, Adaptive Testing, Intelligent Tutoring Systems
Peer reviewed
Raju, Nambury S.; And Others – Applied Psychological Measurement, 1995
Internal measures of differential functioning of items and tests (DFIT) based on item response theory (IRT) are proposed. The new differential test functioning index leads to noncompensatory DIF indices. Monte Carlo studies demonstrate that these indices are accurate in assessing DIF. (SLD)
Descriptors: Item Response Theory, Monte Carlo Methods, Test Bias, Test Items
Peer reviewed
Allen, Nancy L.; Donoghue, John R. – Journal of Educational Measurement, 1996
Examined the effect of complex sampling of items on the measurement of differential item functioning (DIF) using the Mantel-Haenszel procedure through a Monte Carlo study. Suggests the superiority of the pooled booklet method when items are selected for examinees according to a balanced incomplete block design. Discusses implications for other DIF…
Descriptors: Item Bias, Monte Carlo Methods, Research Design, Sampling
Peer reviewed
Nandakumar, Ratna; Yu, Feng; Li, Hsin-Hung; Stout, William – Applied Psychological Measurement, 1998
Investigated the performance of the Poly-DIMTEST (PD) procedure (and associated computer program) in assessing the unidimensionality of test data produced by polytomous items through Monte Carlo simulation. Results show that PD can confirm unidimensionality for unidimensional simulated data and can detect lack of unidimensionality. (SLD)
Descriptors: Evaluation Methods, Item Response Theory, Monte Carlo Methods, Simulation
Peer reviewed
Patz, Richard J.; Junker, Brian W. – Journal of Educational and Behavioral Statistics, 1999
Extends the basic Markov chain Monte Carlo (MCMC) strategy of R. Patz and B. Junker (1999) for Bayesian inference in complex Item Response Theory settings to address issues such as nonresponse, designed missingness, multiple raters, guessing behaviors, and partial credit (polytomous) test items. Applies the MCMC method to data from the National…
Descriptors: Bayesian Statistics, Item Response Theory, Markov Processes, Monte Carlo Methods
Kromrey, Jeffrey D.; Parshall, Cynthia G.; Yi, Qing – 1998
The effects of anchor test characteristics on the accuracy and precision of test equating in the "common items, nonequivalent groups" design were studied. The study also considered the effects of nonparallel base and new forms on the equating solution, and it investigated the effects of differential weighting on the success of equating…
Descriptors: Equated Scores, High Schools, Item Response Theory, Monte Carlo Methods