Showing all 14 results
MacDonald, George T. – ProQuest LLC, 2014
A simulation study was conducted to explore the performance of the linear logistic test model (LLTM) when the relationships between items and cognitive components were misspecified. Factors manipulated included percent of misspecification (0%, 1%, 5%, 10%, and 15%), form of misspecification (under-specification, balanced misspecification, and…
Descriptors: Simulation, Item Response Theory, Models, Test Items
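The LLTM expresses each Rasch item difficulty as a weighted sum of cognitive-component difficulties, so Q-matrix misspecification can be simulated by perturbing that weight matrix. Below is a minimal sketch of one such condition; all names and settings are illustrative assumptions, not the dissertation's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_comps, n_persons = 30, 5, 1000

q = rng.integers(0, 2, size=(n_items, n_comps))       # true item-by-component Q-matrix
eta = rng.normal(0.0, 1.0, size=n_comps)              # component (basic-parameter) difficulties
beta = q @ eta                                        # LLTM: item difficulty is linear in components
theta = rng.normal(0.0, 1.0, size=n_persons)

# Rasch probabilities and simulated responses
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
x = rng.uniform(size=p.shape) < p

def underspecify(q, pct, rng):
    """Zero out a given fraction of the 1-entries (under-specification)."""
    q_mis = q.copy()
    ones = np.argwhere(q_mis == 1)
    drop = rng.choice(len(ones), size=int(round(pct * len(ones))), replace=False)
    for i, j in ones[drop]:
        q_mis[i, j] = 0
    return q_mis

q_5pct = underspecify(q, 0.05, rng)                   # a 5% misspecification condition
```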
Peer reviewed
He, Wei; Wolfe, Edward W. – Educational and Psychological Measurement, 2012
In administration of individually administered intelligence tests, items are commonly presented in a sequence of increasing difficulty, and test administration is terminated after a predetermined number of incorrect answers. This practice produces stochastically censored data, a form of nonignorable missing data. By manipulating four factors…
Descriptors: Individual Testing, Intelligence Tests, Test Items, Test Length
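The discontinue rule described here is straightforward to mimic: present items in increasing difficulty and stop after a fixed run of incorrect answers, so later items are missing not at random. A sketch under assumed names and settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, stop_after = 40, 3
beta = np.sort(rng.normal(0, 1, n_items))            # items ordered by increasing difficulty

def administer(theta, beta, stop_after, rng):
    x = np.full(len(beta), np.nan)                   # NaN = item never administered
    wrong_run = 0
    for i, b in enumerate(beta):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        x[i] = rng.uniform() < p
        wrong_run = 0 if x[i] == 1 else wrong_run + 1
        if wrong_run == stop_after:                  # discontinue rule trips here
            break
    return x                                         # a stochastically censored record

record = administer(theta=0.2, beta=beta, stop_after=stop_after, rng=rng)
```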
Peer reviewed
de la Torre, Jimmy; Song, Hao – Applied Psychological Measurement, 2009
Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…
Descriptors: Ability, Tests, Item Response Theory, Data Analysis
de la Torre, Jimmy; Patz, Richard J. – 2002
It is not unusual for several tests measuring different abilities to be given in one test administration. A common practice is to estimate these abilities independently of each other, but this paper proposes a more efficient method of estimating these abilities that takes into account the correlational structure of the abilities. The method uses a…
Descriptors: Ability, Bayesian Statistics, Estimation (Mathematics), Item Response Theory
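The proposed efficiency gain comes from scoring the subtests jointly under a correlated prior rather than independently. The sketch below illustrates that idea with a grid-based EAP for two 2PL subtests; the grid approach and every name here are assumptions standing in for the paper's own estimation machinery.

```python
import numpy as np
from scipy.stats import multivariate_normal

def eap_bivariate(x1, a1, b1, x2, a2, b2, rho, grid=np.linspace(-4, 4, 81)):
    t1, t2 = np.meshgrid(grid, grid, indexing="ij")

    def loglik(t, x, a, b):
        # 2PL log-likelihood of one subtest's responses on the theta grid
        p = 1.0 / (1.0 + np.exp(-a * (t[..., None] - b)))
        return np.log(np.where(x, p, 1.0 - p)).sum(-1)

    prior = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
    logpost = (loglik(t1, x1, a1, b1) + loglik(t2, x2, a2, b2)
               + prior.logpdf(np.dstack([t1, t2])))
    w = np.exp(logpost - logpost.max())
    w /= w.sum()
    return (w * t1).sum(), (w * t2).sum()            # EAP for each subtest's theta

x1 = np.array([1, 0, 1]); x2 = np.array([1, 1, 0])
a = np.ones(3); b = np.array([-1.0, 0.0, 1.0])
print(eap_bivariate(x1, a, b, x2, a, b, rho=0.8))
```

With rho = 0, this reduces to two independent univariate EAPs; the correlation is what lets each subtest borrow strength from the other.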
Peer reviewed
Fox, Jean-Paul; Glas, Cees A. W. – Psychometrika, 2001
Imposes a two-level regression model on the ability parameters in an item response theory (IRT) model. Uses a simulation study and an empirical data set to show that the parameters of the two-parameter normal ogive model and the multilevel model can be estimated in a Bayesian framework using Gibbs sampling. (SLD)
Descriptors: Ability, Bayesian Statistics, Equations (Mathematics), Estimation (Mathematics)
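Albert-style data augmentation is what makes Gibbs sampling tractable for the normal ogive model: given truncated-normal latent responses, the ability parameter's full conditional is plain normal. A stripped-down sketch of the theta step only, with item parameters held fixed and all names assumed:

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_theta(x, a, b, n_iter=500, seed=2):
    rng = np.random.default_rng(seed)
    n_persons, n_items = x.shape
    theta = np.zeros(n_persons)
    for _ in range(n_iter):
        eta = np.outer(theta, a) - b                 # linear predictor a_j*theta_i - b_j
        # Z | x, theta ~ N(eta, 1), truncated to (0, inf) if x = 1, (-inf, 0) if x = 0
        lo = np.where(x == 1, -eta, -np.inf)
        hi = np.where(x == 1, np.inf, -eta)
        z = eta + truncnorm.rvs(lo, hi, size=x.shape, random_state=rng)
        # theta | Z is conjugate normal under a N(0, 1) prior
        prec = 1.0 + (a ** 2).sum()
        mean = ((z + b) * a).sum(axis=1) / prec
        theta = rng.normal(mean, 1.0 / np.sqrt(prec))
    return theta
```

A full sampler would add analogous conditional draws for the item parameters (and, in the three-parameter extension, the guessing parameter).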
Beguin, Anton A.; Glas, Cees A. W. – 1998
A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will support multiple samples from multiple…
Descriptors: Ability, Bayesian Statistics, Estimation (Mathematics), Item Response Theory
Peer reviewed
Wang, Tianyou; Vispoel, Walter P. – Journal of Educational Measurement, 1998
Used simulations of computerized adaptive tests to evaluate results yielded by four commonly used ability estimation methods: maximum likelihood estimation (MLE) and three Bayesian approaches. Results show clear distinctions between MLE and Bayesian methods. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
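The MLE-versus-Bayesian contrast is easy to reproduce in miniature: MLE maximizes the response-pattern likelihood, while EAP (one of the Bayesian approaches studied) averages theta over the posterior and shrinks toward the prior mean. A sketch under the 2PL with made-up item parameters:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def p2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def negloglik(theta, x, a, b):
    p = p2pl(theta, a, b)
    return -np.sum(np.where(x, np.log(p), np.log(1.0 - p)))

def mle(x, a, b):
    return minimize_scalar(negloglik, bounds=(-4, 4), args=(x, a, b),
                           method="bounded").x

def eap(x, a, b, grid=np.linspace(-4, 4, 161)):
    loglik = np.array([-negloglik(t, x, a, b) for t in grid])
    w = np.exp(loglik) * np.exp(-grid ** 2 / 2)      # N(0, 1) prior
    return np.sum(w * grid) / np.sum(w)

a = np.array([1.2, 0.8, 1.5, 1.0]); b = np.array([-0.5, 0.0, 0.5, 1.0])
x = np.array([1, 1, 0, 0])
print(mle(x, a, b), eap(x, a, b))                    # EAP shrinks toward the prior mean
```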
Peer reviewed
Nicewander, W. Alan; Thomasson, Gary L. – Applied Psychological Measurement, 1999
Derives three reliability estimates for the Bayes modal estimate (BME) and the maximum-likelihood estimate (MLE) of theta in computerized adaptive tests (CATs). Computes the three reliability estimates and the true reliabilities of both BME and MLE for seven simulated CATs. Results show the true reliabilities for BME and MLE to be nearly identical…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
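In a simulation, "true" reliability is directly computable as the squared correlation between generating and estimated abilities, which can then be compared against an internal, standard-error-based estimate. A toy sketch with assumed error levels (not the paper's three derived estimators):

```python
import numpy as np

rng = np.random.default_rng(3)
theta = rng.normal(0, 1, 2000)                       # true abilities
se = 0.3 * np.ones_like(theta)                       # pretend CAT standard errors
theta_hat = theta + rng.normal(0, se)                # estimates with error

true_rel = np.corrcoef(theta, theta_hat)[0, 1] ** 2  # "true" reliability
internal = 1 - np.mean(se ** 2) / np.var(theta_hat)  # internal SE-based estimate
print(true_rel, internal)                            # both near 1 / (1 + 0.09)
```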
Fox, Jean-Paul; Glas, Cees A. W. – 1998
A two-level regression model is imposed on the ability parameters in an item response theory (IRT) model. The advantage of using latent rather than observed scores as dependent variables of a multilevel model is that this offers the possibility of separating the influence of item difficulty and ability level and modeling response variation and…
Descriptors: Ability, Bayesian Statistics, Difficulty Level, Error of Measurement
Peer reviewed
Berger, Martijn P. F.; Veerkamp, Wim J. J. – Journal of Educational and Behavioral Statistics, 1997
Some alternative criteria for item selection in adaptive testing are proposed that take into account uncertainty in the ability estimates. A simulation study shows that the likelihood weighted information criterion is a good alternative to the maximum information criterion. Another good alternative uses a Bayesian expected a posteriori estimator.…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
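The appeal of the likelihood-weighted criterion is that early in a test, when the point estimate is unstable, candidate-item information is averaged over plausible theta values, weighted by the interim likelihood, rather than evaluated at the estimate alone. A hedged sketch with all names assumed:

```python
import numpy as np

def p2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    p = p2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)                    # 2PL item information

def likelihood_weighted_info(x, a_seen, b_seen, a_pool, b_pool,
                             grid=np.linspace(-4, 4, 161)):
    # likelihood of the responses so far, evaluated on a theta grid
    p = p2pl(grid[:, None], a_seen, b_seen)
    lik = np.prod(np.where(x, p, 1.0 - p), axis=1)
    # candidate-item information weighted by that likelihood; on a uniform
    # grid the sum is proportional to the integral, which is all ranking needs
    info = fisher_info(grid[:, None], a_pool, b_pool)
    return (lik[:, None] * info).sum(axis=0)

# next item: np.argmax(likelihood_weighted_info(x, a_seen, b_seen, a_pool, b_pool))
```

The maximum information criterion is the degenerate case where all the weight sits at the current point estimate.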
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
In this study some alternative item selection criteria for adaptive testing are proposed. These criteria take into account the uncertainty of the ability estimates. A general weighted information criterion is suggested of which the usual maximum information criterion and the suggested alternative criteria are special cases. A simulation study was…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Kim, Seock-Ho; And Others – 1992
Hierarchical Bayes procedures were compared for estimating item and ability parameters in item response theory. Simulated data sets from the two-parameter logistic model were analyzed using three different hierarchical Bayes procedures: (1) the joint Bayesian with known hyperparameters (JB1); (2) the joint Bayesian with information hyperpriors…
Descriptors: Ability, Bayesian Statistics, Comparative Analysis, Equations (Mathematics)
DeAyala, R. J.; Koch, William R. – 1986
A computerized flexilevel test was implemented, and its ability estimates were compared with those of a computerized adaptive test (CAT) based on Bayesian estimation, as well as with the known true abilities. Results showed that when the flexilevel test was terminated according to Lord's criterion, its ability estimates were highly and…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Comparative Analysis
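A flexilevel test is a fixed-branching design: one item ladder ordered by difficulty, routing to the nearest unused harder item after a correct answer and the nearest unused easier item after an incorrect one. A sketch of that routing under assumed names (scoring and Lord's termination criterion are omitted):

```python
import numpy as np

def flexilevel(theta, difficulties, n_admin, rng):
    beta = np.sort(difficulties)                     # the ordered item ladder
    lo = hi = i = len(beta) // 2                     # start at the median item
    responses = []
    for _ in range(n_admin):
        p = 1.0 / (1.0 + np.exp(-(theta - beta[i])))
        correct = rng.uniform() < p
        responses.append((beta[i], correct))
        if correct:
            hi += 1; i = hi                          # branch up: nearest unused harder item
        else:
            lo -= 1; i = lo                          # branch down: nearest unused easier item
        if i < 0 or i >= len(beta):
            break                                    # ran off the end of the ladder
    return responses

rng = np.random.default_rng(4)
record = flexilevel(theta=0.5, difficulties=rng.normal(0, 1, 60),
                    n_admin=20, rng=rng)
```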
Spray, Judith A.; Reckase, Mark D. – 1994
The issue of test-item selection in support of decision making in adaptive testing is considered. The number of items needed to make a decision is compared for two approaches: selecting items from an item pool that are most informative at the decision point or selecting items that are most informative at the examinee's ability level. The first…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
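The comparison comes down to where the information function is evaluated when ranking the pool: at the fixed cutscore that defines the classification decision, or at the examinee's running ability estimate. A minimal sketch, names assumed:

```python
import numpy as np

def fisher_info(theta, a, b):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)                    # 2PL item information

def select(a_pool, b_pool, point):
    # index of the pool item most informative at the evaluation point
    return int(np.argmax(fisher_info(point, a_pool, b_pool)))

rng = np.random.default_rng(5)
a_pool = rng.uniform(0.5, 2.0, 200)
b_pool = rng.normal(0, 1, 200)

cutscore, theta_hat = 0.0, 1.2
item_at_cut = select(a_pool, b_pool, cutscore)       # rule 1: cutscore-based selection
item_at_est = select(a_pool, b_pool, theta_hat)      # rule 2: estimate-based selection
```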