Showing all 7 results
Peer reviewed
PDF on ERIC (full text available)
Selçuk, Eray; Demir, Ergül – International Journal of Assessment Tools in Education, 2024
This research aims to compare the ability and item parameter estimates of Item Response Theory under maximum likelihood and Bayesian approaches across different Monte Carlo simulation conditions. For this purpose, depending on changes in the prior distribution type, sample size, test length, and logistic model, the ability and item…
Descriptors: Item Response Theory, Item Analysis, Test Items, Simulation
Peer reviewed
PDF on ERIC (full text available)
Silva, R. M.; Guan, Y.; Swartz, T. B. – Journal on Efficiency and Responsibility in Education and Science, 2017
This paper attempts to bridge the gap between classical test theory and item response theory. It is demonstrated that the familiar and popular statistics used in classical test theory can be translated into a Bayesian framework where all of the advantages of the Bayesian paradigm can be realized. In particular, prior opinion can be introduced and…
Descriptors: Item Response Theory, Bayesian Statistics, Test Construction, Markov Processes
Peer reviewed
Direct link
Li, Tongyun; Jiao, Hong; Macready, George B. – Educational and Psychological Measurement, 2016
The present study investigates different approaches to adding covariates and their impact on fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Descriptors: Item Response Theory, Psychometrics, Test Construction, Monte Carlo Methods
Peer reviewed
Direct link
Stark, Stephen; Chernyshenko, Oleksandr S.; Drasgow, Fritz – Applied Psychological Measurement, 2005
This article proposes an item response theory (IRT) approach to constructing and scoring multidimensional pairwise preference items. Individual statements are administered and calibrated using a unidimensional single-stimulus model. Tests are created by combining multidimensional items with a small number of unidimensional pairings needed to…
Descriptors: Test Construction, Scoring, Mathematical Models, Item Response Theory
Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I. – 2002
This paper explores the application of a technique for hierarchical item response theory (IRT) calibration of complex constructed response tasks that has promise both as a calibration tool and as a means of evaluating the isomorphic equivalence of complex constructed response tasks. Isomorphic tasks are explicitly and rigorously designed to be…
Descriptors: Bayesian Statistics, Constructed Response, Estimation (Mathematics), Evaluation Methods
Weiss, David J. – 1983
During 1975-1979, this research into the potential of computerized adaptive testing to reduce errors in measuring human capabilities used Marine recruits for a live-testing validity comparison of computerized adaptive and conventional tests. The program's purposes were to: (1) identify the most useful computer-based adaptive testing…
Descriptors: Ability, Adaptive Testing, Adults, Bayesian Statistics
Kirisci, Levent; Hsu, Tse-Chi – 1992
A predictive adaptive testing (PAT) strategy was developed based on statistical predictive analysis, and its feasibility was studied by comparing PAT performance with that of the Flexilevel, Bayesian modal, and expected a posteriori (EAP) strategies in a simulated environment. The proposed adaptive test is based on the idea of using item difficulty…
Descriptors: Adaptive Testing, Bayesian Statistics, Comparative Analysis, Computer Assisted Testing