Showing 121 to 135 of 172 results
Glas, Cees A. W.; Vos, Hans J. – 2000
This paper focuses on a version of sequential mastery testing (i.e., classifying a student as a master or nonmaster, or continuing testing by administering another item or testlet) in which response behavior is modeled by a multidimensional item response theory (IRT) model. First, a general theoretical framework is outlined that is based on a…
Descriptors: Adaptive Testing, Bayesian Statistics, Classification, Computer Assisted Testing
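As a rough illustration of the sequential mastery idea, here is a minimal unidimensional sketch with a Rasch likelihood and posterior cut-off rules; the authors' model is multidimensional and decision-theoretic, and all thresholds and parameter values below are hypothetical.

# Hypothetical sketch of sequential mastery testing (not the authors' exact
# multidimensional model): after each item, classify as master/nonmaster when
# the posterior probability that theta exceeds a cutscore is decisive,
# otherwise continue testing. Rasch likelihood, discrete prior grid.
import numpy as np

def rasch_like(x, theta, b):
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p if x == 1 else 1.0 - p

def sequential_mastery(responses, difficulties, cut=0.0,
                       upper=0.95, lower=0.05):
    grid = np.linspace(-4, 4, 161)              # theta grid
    post = np.exp(-0.5 * grid**2)               # N(0,1) prior, unnormalized
    for x, b in zip(responses, difficulties):
        post = post * np.array([rasch_like(x, t, b) for t in grid])
        post = post / post.sum()
        p_master = post[grid >= cut].sum()      # Pr(theta >= cut | data)
        if p_master >= upper:
            return "master"
        if p_master <= lower:
            return "nonmaster"
    return "continue"                           # undecided: administer more items

print(sequential_mastery([1, 1, 1, 1, 1], [0.0, 0.2, -0.1, 0.4, 0.1]))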
Peer reviewed
Berger, Martijn P. F.; Veerkamp, Wim J. J. – Journal of Educational and Behavioral Statistics, 1997
Some alternative criteria for item selection in adaptive testing are proposed that take into account uncertainty in the ability estimates. A simulation study shows that the likelihood weighted information criterion is a good alternative to the maximum information criterion. Another good alternative uses a Bayesian expected a posteriori estimator.…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
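A sketch of the likelihood-weighted selection idea under stated assumptions (2PL items and a coarse ability grid; not the authors' exact procedure): rather than maximizing Fisher information at the point estimate of ability, each candidate item is scored by its information integrated over ability, weighted by the current likelihood.

# Likelihood-weighted information item selection (illustrative only).
import numpy as np

def p2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    p = p2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

def pick_item(responses, administered, pool, grid=np.linspace(-4, 4, 81)):
    # likelihood of the observed responses at each grid point
    like = np.ones_like(grid)
    for x, (a, b) in zip(responses, administered):
        p = p2pl(grid, a, b)
        like *= p**x * (1 - p)**(1 - x)
    like /= like.sum()
    # weighted information for every remaining item; return the best index
    scores = [np.sum(like * fisher_info(grid, a, b)) for (a, b) in pool]
    return int(np.argmax(scores))

pool = [(1.0, -0.5), (1.5, 0.0), (0.8, 1.0)]
print(pick_item([1, 0], [(1.2, -1.0), (1.0, 0.5)], pool))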
Peer reviewed
Lin, Miao-Hsiang; Hsiung, Chao A. – Psychometrika, 1994
Two simple empirical approximate Bayes estimators are introduced for estimating domain scores under binomial and hypergeometric distributions, respectively. Criteria are established for when to use these estimators instead of their maximum likelihood counterparts. (SLD)
Descriptors: Adaptive Testing, Bayesian Statistics, Computation, Equations (Mathematics)
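To convey the empirical Bayes flavor of such estimators, here is a generic beta-binomial shrinkage sketch fitted by method of moments; it is not the authors' exact functions, and the data are hypothetical.

# Rough illustration of an empirical Bayes domain-score estimate under a
# binomial model. Each examinee answers n items; a beta prior is fitted to
# the group by method of moments, and each observed proportion-correct is
# shrunk toward the group mean.
import numpy as np

def eb_domain_scores(correct, n):
    p = correct / n
    m, v = p.mean(), p.var()
    # method-of-moments beta(a, b) fit, guarding against degenerate estimates
    common = m * (1 - m) / max(v, 1e-9) - 1.0
    a, b = max(m * common, 0.01), max((1 - m) * common, 0.01)
    # posterior mean of the domain score: (a + x) / (a + b + n)
    return (a + correct) / (a + b + n)

correct = np.array([18, 12, 25, 9, 22])
print(eb_domain_scores(correct, n=30))      # shrunk toward the group mean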
Mislevy, Robert J.; Wilson, Mark – 1992
Standard item response theory (IRT) models posit latent variables to account for regularities in students' performance on test items. They can accommodate learning only if the expected changes in performance are smooth, and, in an appropriate metric, uniform over items. Wilson's "Saltus" model extends the ideas of IRT to development that…
Descriptors: Bayesian Statistics, Change, Development, Item Response Theory
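In a commonly cited Rasch-based form of the Saltus model (notation here is illustrative and may differ from the paper's), the probability that person p in developmental class h answers item i correctly is

P(X_{pi} = 1 \mid \theta_p, h) = \frac{\exp\bigl(\theta_p - \beta_i + \tau_{h\,k(i)}\bigr)}{1 + \exp\bigl(\theta_p - \beta_i + \tau_{h\,k(i)}\bigr)},

where \beta_i is the item difficulty, k(i) indexes the item's type, and \tau_{hk} shifts the difficulty of type-k items for persons in class h, allowing abrupt, stage-like changes rather than smooth, uniform growth.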
Glas, Cees A. W.; van der Linden, Wim J. – 2001
To reduce the cost of item writing and to enhance the flexibility of item presentation, items can be generated by item-cloning techniques. An important consequence of cloning is that it may introduce variability in the item parameters. Therefore, a multilevel item response model is presented in which it is assumed that the item parameters of a…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Costs
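A toy sketch of the cloning idea (all parameter values hypothetical; the paper's multilevel model is estimated with Bayesian methods rather than simulated this way): each clone's 2PL parameters are drawn from a parent-level distribution instead of being treated as fixed.

# Clones from one parent item get randomly perturbed (a, b) parameters.
import numpy as np

rng = np.random.default_rng(0)

def draw_clone(parent_a, parent_b, sd_a=0.1, sd_b=0.2):
    a = max(rng.normal(parent_a, sd_a), 0.1)   # keep discrimination positive
    b = rng.normal(parent_b, sd_b)
    return a, b

def prob_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

parent = (1.2, 0.3)                            # parent item's (a, b)
clones = [draw_clone(*parent) for _ in range(5)]
for a, b in clones:
    print(f"a={a:.2f} b={b:.2f} P(correct|theta=0)={prob_correct(0.0, a, b):.3f}")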
Johnson, Matthew S.; Sinharay, Sandip – 2003
For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…
Descriptors: Bayesian Statistics, Constructed Response, Educational Assessment, Estimation (Mathematics)
Peer reviewed
Bock, R. Darrell; And Others – Applied Psychological Measurement, 1988
A method of item factor analysis is described that is based on Thurstone's multiple-factor model and implemented by marginal maximum likelihood estimation and the EM algorithm. Also assessed are the statistical significance of successive factors added to the model, provisions for guessing and omitted items, and Bayes constraints. (TJH)
Descriptors: Algorithms, Bayesian Statistics, Equations (Mathematics), Estimation (Mathematics)
Peer reviewed
Levy, Roy; Mislevy, Robert J. – International Journal of Testing, 2004
The challenges of modeling students' performance in computer-based interactive assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance. This article describes a Bayesian approach to modeling and estimating cognitive models…
Descriptors: Computer Assisted Testing, Markov Processes, Computer Networks, Bayesian Statistics
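A deliberately tiny sketch of the Bayes-net scoring idea behind such models (the structure and probabilities are entirely hypothetical; the article's models are far richer and estimated by Markov chain Monte Carlo): one binary proficiency node with two conditionally independent observable task outcomes, with the posterior computed by enumeration.

# Posterior over proficiency given observed task outcomes (toy example).
prior = {"master": 0.5, "nonmaster": 0.5}
# P(task solved | proficiency), one table per observable
cond = [
    {"master": 0.85, "nonmaster": 0.30},   # task 1
    {"master": 0.70, "nonmaster": 0.20},   # task 2
]
observed = [1, 0]                          # solved task 1, failed task 2

post = {}
for state, p in prior.items():
    like = 1.0
    for x, table in zip(observed, cond):
        q = table[state]
        like *= q if x == 1 else 1.0 - q
    post[state] = p * like
z = sum(post.values())
print({s: round(v / z, 3) for s, v in post.items()})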
Peer reviewed
Revuelta, Javier – Psychometrika, 2004
Two psychometric models are presented for evaluating the difficulty of the distractors in multiple-choice items. They are based on the criterion of rising distractor selection ratios, which facilitates interpretation of the subject and item parameters. Statistical inferential tools are developed in a Bayesian framework: modal a posteriori…
Descriptors: Multiple Choice Tests, Psychometrics, Models, Difficulty Level
Peer reviewed
Penfield, Randall D. – Applied Measurement in Education, 2006
This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
Descriptors: Bayesian Statistics, Adaptive Testing, Computer Assisted Testing, Test Items
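A sketch of MPI selection for partial credit items under simplifying assumptions (unit discrimination, hypothetical step parameters; a simplified reading of the approach rather than the article's study design): for the partial credit model, item information at theta equals the variance of the item score, and MPI picks the item maximizing information averaged over the current posterior.

# Maximum posterior-weighted information for partial credit items.
import numpy as np

def pcm_probs(theta, deltas):
    # categories 0..m; cumulative sums of (theta - delta_j) define the kernel
    kernel = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas))])
    e = np.exp(kernel - kernel.max())
    return e / e.sum()

def pcm_info(theta, deltas):
    p = pcm_probs(theta, deltas)
    k = np.arange(len(p))
    return np.sum(k**2 * p) - np.sum(k * p)**2   # variance of the item score

def mpi_pick(posterior, grid, pool):
    scores = [np.sum(posterior * np.array([pcm_info(t, d) for t in grid]))
              for d in pool]
    return int(np.argmax(scores))

grid = np.linspace(-4, 4, 81)
posterior = np.exp(-0.5 * grid**2); posterior /= posterior.sum()
pool = [[-0.5, 0.5], [0.0, 1.0], [-1.0, 0.0, 1.0]]   # step difficulties
print(mpi_pick(posterior, grid, pool))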
Kirisci, Levent; Hsu, Tse-Chi – 1988
The predictive analysis approach to adaptive testing originated in the idea of statistical predictive analysis suggested by J. Aitchison and I. R. Dunsmore (1975). The proposed adaptive testing model is based on a parameter-free predictive distribution. Aitchison and Dunsmore define statistical prediction analysis as the use of data obtained from an…
Descriptors: Adaptive Testing, Bayesian Statistics, Comparative Analysis, Item Analysis
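To illustrate what a predictive distribution means here, consider a generic beta-binomial example in the Aitchison-Dunsmore spirit (not the proposed adaptive testing model itself): given x successes on n past items, the distribution of successes on m future items follows with the success parameter integrated out.

# Beta-binomial posterior predictive for future successes.
from math import comb
from scipy.special import betaln
import numpy as np

def predictive(x, n, m, a=1.0, b=1.0):
    # P(y successes on m future items | x of n past), beta(a, b) prior
    return np.array([
        comb(m, y) * np.exp(betaln(a + x + y, b + n - x + m - y)
                            - betaln(a + x, b + n - x))
        for y in range(m + 1)
    ])

p = predictive(x=7, n=10, m=5)
print(p.round(3), p.sum())          # a proper distribution over 0..5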
van der Linden, Wim J. – 1996
R. J. Owen (1975) proposed an approximate empirical Bayes procedure for item selection in adaptive testing. The procedure replaces the true posterior by a normal approximation with closed-form expressions for its first two moments. This approximation was necessary to minimize the computational complexity involved in a fully Bayesian approach, but…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computation
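Here is a sketch of what that normal approximation replaces; for simplicity the first two posterior moments are computed numerically on a grid rather than via Owen's closed-form expressions, and the 3PL item parameters are hypothetical. After each response the true posterior is replaced by a normal with the same mean and variance.

# Normal approximation of the ability posterior by moment matching.
import numpy as np

def p3pl(theta, a, b, c):
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def normal_update(mu, sigma, x, a, b, c=0.2):
    grid = np.linspace(mu - 6 * sigma, mu + 6 * sigma, 401)
    prior = np.exp(-0.5 * ((grid - mu) / sigma) ** 2)
    p = p3pl(grid, a, b, c)
    post = prior * (p if x == 1 else 1 - p)
    post /= post.sum()
    new_mu = np.sum(grid * post)
    new_sigma = np.sqrt(np.sum((grid - new_mu) ** 2 * post))
    return new_mu, new_sigma        # moments of the normal replacement

mu, sigma = 0.0, 1.0
for x, (a, b) in zip([1, 1, 0], [(1.0, 0.0), (1.2, 0.5), (0.9, 1.0)]):
    mu, sigma = normal_update(mu, sigma, x, a, b)
    print(f"mu={mu:.3f} sigma={sigma:.3f}")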
Peer reviewed
Mislevy, Robert J. – Psychometrika, 1984
Assuming vectors of item responses depend on ability through a fully specified item response model, this paper presents maximum likelihood equations for estimating the population parameters without estimating an ability parameter for each subject. Asymptotic standard errors, tests of fit, computing approximations, and details of four special cases…
Descriptors: Bayesian Statistics, Estimation (Mathematics), Goodness of Fit, Latent Trait Theory
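A sketch of the marginalization idea (assumptions: Rasch items with known difficulties, a normal population, simulated data; not the paper's full treatment): the population mean and standard deviation are estimated by maximizing the marginal likelihood, approximated with Gauss-Hermite quadrature, with no ability estimate computed for any individual examinee.

# Marginal maximum likelihood for population parameters, abilities integrated out.
import numpy as np
from scipy.optimize import minimize

nodes, weights = np.polynomial.hermite_e.hermegauss(21)  # N(0,1) quadrature

def neg_marginal_loglik(params, X, b):
    mu, log_sd = params
    theta = mu + np.exp(log_sd) * nodes          # transformed quadrature points
    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))  # points x items
    # likelihood of each response vector at each quadrature point
    L = np.prod(np.where(X[:, None, :] == 1, P[None], 1 - P[None]), axis=2)
    return -np.sum(np.log(L @ (weights / weights.sum())))

rng = np.random.default_rng(1)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])        # known item difficulties
true_theta = rng.normal(0.5, 1.0, size=200)
X = (rng.random((200, 5)) < 1 / (1 + np.exp(-(true_theta[:, None] - b)))).astype(int)
res = minimize(neg_marginal_loglik, x0=[0.0, 0.0], args=(X, b), method="Nelder-Mead")
print("mu_hat=%.2f sd_hat=%.2f" % (res.x[0], np.exp(res.x[1])))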
Peer reviewed
van der Linden, Wim J., Ed. – Applied Psychological Measurement, 1986
In new testing theory and practice, the test item bank is replacing the standard test, and item response theory is replacing classical test theory. Eight papers and a commentary on test item banking are presented in this special issue. (SLD)
Descriptors: Adaptive Testing, Algorithms, Bayesian Statistics, Computer Assisted Testing
Peer reviewed
Jensema, Carl J. – Applied Psychological Measurement, 1977
Owen's Bayesian tailored testing method is introduced along with a brief review of its derivation. The characteristics of a good item bank are outlined and explored in terms of their influence on the Bayesian tailoring process. (Author/RC)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Oriented Programs