Showing all 6 results
Peer reviewed
Direct link
Liu, Yang; Yang, Ji Seung – Journal of Educational and Behavioral Statistics, 2018
The uncertainty arising from item parameter estimation is often not negligible and must be accounted for when calculating latent variable (LV) scores in item response theory (IRT). This is particularly so when the calibration sample size is limited and/or the calibration IRT model is complex. In the current work, we treat two-stage IRT scoring as a…
Descriptors: Intervals, Scores, Item Response Theory, Bayesian Statistics
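A rough illustration of the issue the entry above describes (not the authors' method): calibration uncertainty can be propagated into a score by redrawing item parameters from a normal approximation around their estimates and recomputing an EAP score per draw. All parameter values, standard errors, and responses below are made up for the example.

```python
import math, random

random.seed(0)

# Hypothetical calibrated 2PL items: (a, b, se_a, se_b).
items = [(1.2, -0.5, 0.1, 0.2), (0.8, 0.0, 0.1, 0.2), (1.5, 0.7, 0.1, 0.2)]
responses = [1, 1, 0]
grid = [-4 + 0.1 * k for k in range(81)]  # quadrature grid for theta

def p2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap(responses, params):
    """Expected a posteriori score under a standard normal prior."""
    num = den = 0.0
    for t in grid:
        like = math.exp(-0.5 * t * t)  # unnormalized N(0, 1) prior weight
        for x, (a, b) in zip(responses, params):
            p = p2pl(t, a, b)
            like *= p if x else (1 - p)
        num += t * like
        den += like
    return num / den

# Plug-in score that ignores calibration uncertainty.
point = eap(responses, [(a, b) for a, b, _, _ in items])

# Redraw parameters to reflect calibration uncertainty in the score.
draws = []
for _ in range(200):
    sampled = [(random.gauss(a, sa), random.gauss(b, sb))
               for a, b, sa, sb in items]
    draws.append(eap(responses, sampled))
spread = max(draws) - min(draws)  # variation attributable to calibration
```

The spread across draws gives a sense of how much the score interval widens once item-parameter uncertainty is no longer ignored.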
Peer reviewed
Direct link
He, Wei; Wolfe, Edward W. – Educational and Psychological Measurement, 2012
In administration of individually administered intelligence tests, items are commonly presented in a sequence of increasing difficulty, and test administration is terminated after a predetermined number of incorrect answers. This practice produces stochastically censored data, a form of nonignorable missing data. By manipulating four factors…
Descriptors: Individual Testing, Intelligence Tests, Test Items, Test Length
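A minimal sketch of the discontinue-rule mechanism the entry above describes: items are administered in order of increasing difficulty and testing stops after a fixed number of consecutive incorrect answers, leaving later items unobserved. The success probabilities and stopping rule below are invented for illustration.

```python
import random

random.seed(1)

def administer(p_correct_by_item, stop_after=2):
    """Simulate one examinee; return 0/1 responses, None once censored."""
    observed = []
    wrong_streak = 0
    for p in p_correct_by_item:
        if wrong_streak >= stop_after:
            observed.append(None)  # never administered: nonignorable missing
            continue
        x = 1 if random.random() < p else 0
        wrong_streak = 0 if x else wrong_streak + 1
        observed.append(x)
    return observed

# Declining success probabilities mimic increasing item difficulty.
data = administer([0.95, 0.9, 0.7, 0.4, 0.2, 0.1, 0.05])
```

Because stopping depends on the responses themselves, the resulting missingness is stochastically censored rather than missing at random.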
Peer reviewed
Direct link
Rudner, Lawrence M. – Practical Assessment, Research & Evaluation, 2009
This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…
Descriptors: Classification, Scoring, Item Response Theory, Measurement
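The framework described above — conditional probabilities of a correct response per item within each mastery state — lends itself to a Bayes-rule sketch. The two states, priors, and probabilities below are illustrative assumptions, not values from the paper.

```python
# P(correct | state) for each of 4 items, per hypothetical mastery state.
p_correct = {
    "master":     [0.90, 0.80, 0.85, 0.75],
    "non_master": [0.40, 0.30, 0.35, 0.25],
}
priors = {"master": 0.5, "non_master": 0.5}

def classify(responses):
    """Return posterior P(state | responses) for 0/1 item scores."""
    posts = {}
    for state, ps in p_correct.items():
        like = priors[state]
        for x, p in zip(responses, ps):
            like *= p if x == 1 else (1 - p)
        posts[state] = like
    total = sum(posts.values())
    return {s: v / total for s, v in posts.items()}

post = classify([1, 1, 0, 1])  # mostly correct pattern
```

The examinee is assigned to whichever state has the larger posterior; with a loss function, the same posteriors support minimum-expected-loss decisions.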
PDF pending restoration
Green, Bert F. – 2002
Maximum likelihood and Bayesian estimates of proficiency, typically used in adaptive testing, use item weights that depend on test taker proficiency to estimate test taker proficiency. In this study, several methods were explored through computer simulation using fixed item weights, which depend mainly on the item's difficulty. The simpler scores…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Computer Simulation
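The contrast the entry above investigates can be sketched roughly: a fixed-weight score (here simply the proportion correct, a stand-in for weights that do not depend on proficiency) versus a maximum likelihood proficiency estimate, shown under a Rasch model with made-up difficulties and responses.

```python
import math

difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
responses =    [1,    1,    1,   0,   1]

def rasch_p(theta, b):
    """P(correct) under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def ml_theta(responses, bs, iters=50):
    """Newton-Raphson maximum likelihood estimate of proficiency."""
    theta = 0.0
    for _ in range(iters):
        ps = [rasch_p(theta, b) for b in bs]
        grad = sum(x - p for x, p in zip(responses, ps))  # score function
        info = sum(p * (1 - p) for p in ps)               # Fisher information
        theta += grad / info
    return theta

# Fixed-weight score: every item weighted equally, regardless of theta.
fixed_score = sum(responses) / len(responses)
theta_hat = ml_theta(responses, difficulties)
```

The ML estimate reweights items through the model at each iteration, whereas the fixed-weight score never revisits the weights; the study's question is how much precision that simplification costs.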
Yen, Wendy M. – 1982
The three-parameter logistic model discussed here was used by CTB/McGraw-Hill in the development of the Comprehensive Tests of Basic Skills, Form U (CTBS/U) and the Test of Cognitive Skills (TCS), published in the fall of 1981. The development, standardization, and scoring of the tests are described, particularly as these procedures were influenced by…
Descriptors: Achievement Tests, Bayesian Statistics, Cognitive Processes, Data Collection
Peer reviewed
Segall, Daniel O. – Psychometrika, 1996
Maximum likelihood and Bayesian procedures are presented for item selection and scoring of multidimensional adaptive tests. A demonstration with simulated response data illustrates that multidimensional adaptive testing can provide equal or higher reliabilities with fewer items than are required in one-dimensional adaptive testing. (SLD)
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Equations (Mathematics)
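Item selection in multidimensional adaptive testing, as studied in the entry above, can be sketched under assumptions the abstract does not specify: a two-dimensional compensatory logistic model and a D-optimality rule that picks the unused item adding the most determinant to the accumulated Fisher information matrix. The item bank below is hypothetical.

```python
import math

def p_item(theta, a, d):
    """P(correct) under a 2D compensatory logistic model."""
    z = sum(ai * ti for ai, ti in zip(a, theta)) - d
    return 1.0 / (1.0 + math.exp(-z))

def info_matrix(theta, a, d):
    """Fisher information contribution p(1-p) * a a^T of one item."""
    p = p_item(theta, a, d)
    w = p * (1 - p)
    return [[w * a[i] * a[j] for j in range(2)] for i in range(2)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def add(m, n):
    return [[m[i][j] + n[i][j] for j in range(2)] for i in range(2)]

# Hypothetical bank: (discrimination vector a, difficulty d). The first item
# loads on dimension 1, the second on dimension 2, the third on both.
bank = [([1.5, 0.2], 0.0), ([0.2, 1.4], 0.3), ([0.9, 0.9], -0.2)]
theta = [0.0, 0.0]                    # current provisional ability estimate
current = [[0.1, 0.0], [0.0, 0.1]]    # small prior information

# D-optimal choice: maximize determinant of the updated information matrix.
best = max(range(len(bank)),
           key=lambda k: det2(add(current, info_matrix(theta, *bank[k]))))
```

Selecting on the joint information matrix is what lets a multidimensional adaptive test borrow strength across correlated dimensions and match unidimensional reliability with fewer items.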