Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 19
Descriptor
Bayesian Statistics: 20
Computation: 20
Item Response Theory: 18
Monte Carlo Methods: 10
Markov Processes: 9
Simulation: 9
Models: 7
Computer Software: 6
Maximum Likelihood Statistics: 6
Test Length: 6
Test Items: 5
Source
Applied Psychological Measurement: 20
Author
Magis, David: 2
Raiche, Gilles: 2
Shigemasu, Kazuo: 2
Wang, Wen-Chung: 2
Babcock, Ben: 1
Beland, Sebastien: 1
Chang, Hua-Hua: 1
Chen, Po-Hsi: 1
Cho, Sun-Joo: 1
Cohen, Allan S.: 1
Dai, Yunyun: 1
Publication Type
Journal Articles: 20
Reports - Research: 11
Reports - Evaluative: 8
Reports - Descriptive: 1
Education Level
Higher Education: 2
Elementary Education: 1
Grade 3: 1
Junior High Schools: 1
Middle Schools: 1
Postsecondary Education: 1
Secondary Education: 1
Assessments and Surveys
Florida Comprehensive…: 1
Dai, Yunyun – Applied Psychological Measurement, 2013
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Descriptors: Item Response Theory, Test Bias, Computation, Bayesian Statistics
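As background for the kind of model involved, here is a minimal sketch of the marginal likelihood of a two-class mixture Rasch model, in which each latent class carries its own item difficulties. The function name, the two-class restriction, and the Rasch form are illustrative assumptions, not the estimation procedure studied in the article.

```python
import numpy as np

def mixture_rasch_loglik(responses, theta, difficulties, class_probs):
    """Log-likelihood of a mixture Rasch model (illustrative sketch).

    responses:    (n_persons, n_items) 0/1 matrix
    theta:        (n_persons,) ability values
    difficulties: (n_classes, n_items) class-specific item difficulties
    class_probs:  (n_classes,) mixing proportions, summing to 1
    """
    loglik = 0.0
    for x, th in zip(responses, theta):
        # Probability of the full response pattern within each latent class.
        p = 1.0 / (1.0 + np.exp(-(th - difficulties)))   # (n_classes, n_items)
        pattern = np.prod(np.where(x == 1, p, 1.0 - p), axis=1)
        # Marginalize over class membership.
        loglik += np.log(np.dot(class_probs, pattern))
    return loglik
```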
Johnson, Timothy R. – Applied Psychological Measurement, 2013
One of the distinctions between classical test theory and item response theory is that the former focuses on sum scores and their relationship to true scores, whereas the latter concerns item responses and their relationship to latent scores. Although item response theory is often viewed as the richer of the two theories, sum scores are still…
Descriptors: Item Response Theory, Scores, Computation, Bayesian Statistics
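The link between the two score metrics can be made concrete: given item response functions, the IRT-implied distribution of the sum score at a fixed latent score follows from the Lord-Wingersky recursion. A minimal sketch for dichotomous items, assuming a 2PL parameterization:

```python
import numpy as np

def sum_score_distribution(theta, a, b):
    """Lord-Wingersky recursion: P(sum score = s | theta) for dichotomous items.

    a, b: arrays of 2PL discrimination and difficulty parameters.
    Returns an array of length n_items + 1.
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # correct-response probabilities
    dist = np.array([1.0])                      # score distribution after 0 items
    for p_i in p:
        new = np.zeros(len(dist) + 1)
        new[:-1] += dist * (1.0 - p_i)          # item answered incorrectly
        new[1:] += dist * p_i                   # item answered correctly
        dist = new
    return dist
```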
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien – Applied Psychological Measurement, 2013
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Bayesian Statistics
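For orientation, an unfolding item differs from a cumulative one in that endorsement peaks when the person is near the item's location and falls off in both directions. The sketch below is a simplified squared-distance unfolding response function, not the RTGUM itself; treating the threshold tau as a fixed effect is exactly the simplification the article's random-threshold model relaxes.

```python
import numpy as np

def unfolding_prob(theta, delta, tau):
    """P(agree) for a simplified unfolding item: the probability is highest
    when theta is near the item location delta and decays with squared
    distance; tau shifts the overall agreement level."""
    return 1.0 / (1.0 + np.exp(-(tau - (theta - delta) ** 2)))

# Agreement peaks at theta == delta and is symmetric around it.
print(unfolding_prob(np.array([-2.0, 0.0, 2.0]), delta=0.0, tau=1.0))
```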
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong – Applied Psychological Measurement, 2012
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
Descriptors: Monte Carlo Methods, Computation, Item Response Theory, Weighted Scores
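A rough sketch of the weighted-MAP idea: per-item log-likelihood terms enter the posterior with weights, and the mode is found numerically. The article's WMAP chooses the weights adaptively; the fixed, user-supplied weights below are purely illustrative.

```python
from scipy.optimize import minimize_scalar

def weighted_map(loglik_funcs, weights, prior_sd=1.0):
    """Weighted maximum a posteriori ability estimate (illustrative sketch).

    loglik_funcs: per-item functions theta -> log P(observed response | theta)
    weights:      per-item weights (fixed here; adaptive in the article)
    """
    def neg_log_posterior(theta):
        ll = sum(w * f(theta) for w, f in zip(weights, loglik_funcs))
        log_prior = -0.5 * (theta / prior_sd) ** 2   # N(0, prior_sd^2) prior
        return -(ll + log_prior)

    return minimize_scalar(neg_log_posterior, bounds=(-6, 6), method="bounded").x
```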
Doebler, Anna – Applied Psychological Measurement, 2012
It is shown that deviations of estimated item difficulty parameters from their true values, caused for example by item calibration errors, the neglect of randomness in item difficulty parameters, testlet effects, or rule-based item generation, can lead to systematic bias in point estimation of person parameters in the context of adaptive testing.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computation, Item Response Theory
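The effect is easy to reproduce in a small Monte Carlo sketch: estimate theta by ML against difficulties contaminated with calibration error and compare with the error-free estimate. The Rasch model and the normal calibration error with standard deviation 0.3 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ml_theta(x, b, grid=np.linspace(-4, 4, 801)):
    """Grid-search ML estimate of theta under a Rasch model."""
    p = 1.0 / (1.0 + np.exp(-(grid[:, None] - b[None, :])))
    loglik = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

n_items, theta_true = 30, 0.5
b = rng.normal(0, 1, n_items)                    # true item difficulties
x = (rng.random(n_items) < 1 / (1 + np.exp(-(theta_true - b)))).astype(int)

b_err = b + rng.normal(0, 0.3, n_items)          # calibration error, sd = 0.3
print(ml_theta(x, b), ml_theta(x, b_err))        # error-free vs contaminated
```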
Magis, David; Beland, Sebastien; Raiche, Gilles – Applied Psychological Measurement, 2011
In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…
Descriptors: Test Length, Computation, Item Response Theory, Maximum Likelihood Statistics
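The contrast is easy to see numerically: for a perfect response pattern the likelihood increases monotonically in theta, so ML runs off to infinity, while a normal prior keeps the MAP estimate finite. A minimal sketch assuming a Rasch model and an N(0, 1) prior:

```python
import numpy as np

b = np.array([-1.0, 0.0, 1.0])     # item difficulties
x = np.ones_like(b)                # perfect response pattern

def log_lik(theta):
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

grid = np.linspace(-4, 10, 1401)
ll = np.array([log_lik(t) for t in grid])
lp = ll - 0.5 * grid ** 2          # add log N(0, 1) prior => log posterior

print(grid[np.argmax(ll)])         # ML: hits the upper grid bound (diverges)
print(grid[np.argmax(lp)])         # MAP: finite interior mode
```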
Fukuhara, Hirotaka; Kamata, Akihito – Applied Psychological Measurement, 2011
A differential item functioning (DIF) detection method for testlet-based data was proposed and evaluated in this study. The proposed DIF model is an extension of a bifactor multidimensional item response theory (MIRT) model for testlets. Unlike traditional item response theory (IRT) DIF models, the proposed model takes testlet effects into…
Descriptors: Item Response Theory, Test Bias, Test Items, Bayesian Statistics
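As context, a bifactor testlet model gives every item a loading on the general ability plus a loading on its testlet's specific factor. A minimal sketch of the resulting response probability; the parameter names are assumptions, and the article's DIF extension (group-specific item parameters) is omitted.

```python
import numpy as np

def bifactor_prob(theta_g, theta_s, a_g, a_s, d, testlet_of_item):
    """P(correct) under a simple bifactor testlet model.

    theta_g:         general ability (scalar)
    theta_s:         (n_testlets,) testlet-specific abilities
    a_g, a_s, d:     (n_items,) general/specific slopes and intercepts
    testlet_of_item: (n_items,) index of each item's testlet
    """
    z = a_g * theta_g + a_s * theta_s[testlet_of_item] + d
    return 1.0 / (1.0 + np.exp(-z))
```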
van der Linden, Wim J.; Klein Entink, Rinke H.; Fox, Jean-Paul – Applied Psychological Measurement, 2010
Hierarchical modeling of responses and response times on test items facilitates the use of response times as collateral information in the estimation of the response parameters. In addition to the regular information in the response data, two sources of collateral information are identified: (a) the joint information in the responses and the…
Descriptors: Item Response Theory, Reaction Time, Computation, Bayesian Statistics
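The response-time side of such hierarchical models is commonly a lognormal density in which person speed and item time intensity play roles analogous to ability and difficulty. A minimal sketch of the joint log-likelihood for one person, combining a 2PL response model with a lognormal time model; the article's hierarchical covariance structure and priors are omitted.

```python
import numpy as np

def joint_loglik(x, t, theta, tau, a, b, alpha, beta):
    """Joint log-likelihood of responses x and response times t for one person.

    theta: ability            tau:         speed
    a, b:  2PL slope/difficulty   alpha, beta: time discrimination/intensity
    Assumes log t_i ~ N(beta_i - tau, 1 / alpha_i^2), as in lognormal RT models.
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    ll_resp = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    z = alpha * (np.log(t) - (beta - tau))
    ll_time = np.sum(np.log(alpha) - np.log(t) - 0.5 * np.log(2 * np.pi)
                     - 0.5 * z ** 2)
    return ll_resp + ll_time
```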
Magis, David; Raiche, Gilles – Applied Psychological Measurement, 2010
In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
Descriptors: Maximum Likelihood Statistics, Computation, Bayesian Statistics, Item Response Theory
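One way the ML estimate of theta can fail to be unique is when the model includes a lower asymptote (as in the 3PL): the ability log-likelihood is then not guaranteed to be unimodal. A small grid scan that counts interior local maxima is a simple diagnostic; this is a general illustration, not necessarily the case analyzed in the article.

```python
import numpy as np

def loglik_3pl(theta, x, a, b, c):
    """3PL log-likelihood of a response pattern x at ability theta."""
    p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

def local_maxima(x, a, b, c, grid=np.linspace(-6, 6, 2401)):
    """Grid locations of interior local maxima of the ability log-likelihood."""
    ll = np.array([loglik_3pl(t, x, a, b, c) for t in grid])
    hits = (ll[1:-1] > ll[:-2]) & (ll[1:-1] > ll[2:])
    return grid[1:-1][hits]          # more than one entry => nonunique ML
```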
Babcock, Ben – Applied Psychological Measurement, 2011
Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
Descriptors: Item Response Theory, Sampling, Computation, Statistical Analysis
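A minimal sketch of one Metropolis-Hastings update of a person's ability vector within a Gibbs sampler, under a two-dimensional noncompensatory model in which the success probability is a product of per-dimension logistic terms. The proposal scale, the standard-normal prior, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def noncomp_prob(theta, a, b):
    """Noncompensatory MIRT: success requires 'passing' every dimension,
    so P(correct) is a product of per-dimension logistic terms."""
    return np.prod(1.0 / (1.0 + np.exp(-a * (theta - b))), axis=1)

def log_post(theta, x, a, b):
    p = noncomp_prob(theta, a, b)                 # a, b: (n_items, n_dims)
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p)) - 0.5 * theta @ theta

def mh_step(theta, x, a, b, scale=0.3):
    """One Metropolis-Hastings update of theta within a Gibbs sampler."""
    prop = theta + rng.normal(0, scale, size=theta.shape)
    if np.log(rng.random()) < log_post(prop, x, a, b) - log_post(theta, x, a, b):
        return prop
    return theta
```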
Huang, Hung-Yu; Wang, Wen-Chung; Chen, Po-Hsi; Su, Chi-Ming – Applied Psychological Measurement, 2013
Many latent traits in the human sciences have a hierarchical structure. This study aimed to develop a new class of higher order item response theory models for hierarchical latent traits that are flexible in accommodating both dichotomous and polytomous items, to estimate both item and person parameters jointly, to allow users to specify…
Descriptors: Item Response Theory, Models, Vertical Organization, Bayesian Statistics
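The core of a higher-order structure is that first-order traits are driven by a general trait through loadings. A minimal generative sketch, assuming a linear structure with a standard-normal higher-order trait; the article's models add the item-response and polytomous machinery on top of this.

```python
import numpy as np

rng = np.random.default_rng(2)

def draw_first_order_traits(n_persons, loadings):
    """Draw first-order traits driven by a single higher-order trait.

    theta_d = lambda_d * xi + sqrt(1 - lambda_d^2) * eps, so each first-order
    trait has unit variance and correlates lambda_d with the general trait xi.
    """
    loadings = np.asarray(loadings)
    xi = rng.normal(size=(n_persons, 1))               # higher-order trait
    eps = rng.normal(size=(n_persons, len(loadings)))  # trait-specific parts
    return loadings * xi + np.sqrt(1 - loadings ** 2) * eps

theta = draw_first_order_traits(1000, [0.8, 0.6, 0.7])
print(np.corrcoef(theta, rowvar=False).round(2))
```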
Kieftenbeld, Vincent; Natesan, Prathiba – Applied Psychological Measurement, 2012
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Descriptors: Test Length, Markov Processes, Item Response Theory, Monte Carlo Methods
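For reference, the graded response model at the center of that comparison defines cumulative category probabilities through ordered thresholds, with category probabilities as differences of adjacent cumulative curves. A minimal sketch assuming a 2PL-type parameterization:

```python
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Category probabilities under the graded response model.

    P*(X >= k | theta) = logistic(a * (theta - b_k)) with ordered b_k;
    category probabilities are differences of adjacent cumulative curves.
    """
    b = np.asarray(thresholds)
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= 1), ..., P(X >= K)
    cum = np.concatenate(([1.0], cum, [0.0]))      # P(X >= 0) = 1, P(X > K) = 0
    return cum[:-1] - cum[1:]                      # P(X = k), k = 0..K

print(grm_category_probs(0.3, a=1.2, thresholds=[-1.0, 0.0, 1.5]))
```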
Meyer, J. Patrick – Applied Psychological Measurement, 2010
An examinee faced with a test item will engage in solution behavior or rapid-guessing behavior. These qualitatively different test-taking behaviors bias parameter estimates for item response models that do not control for such behavior. A mixture Rasch model with item response time components was proposed and evaluated through application to real…
Descriptors: Item Response Theory, Response Style (Tests), Reaction Time, Computation
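A minimal sketch of the mixture idea: each response comes either from solution behavior (a Rasch term) or from rapid guessing (a constant chance rate), with the mixing probability tied to response time. The logistic link from log response time, the chance rate, and all names are illustrative assumptions, not the article's model.

```python
import numpy as np

def mixture_prob_correct(theta, b, log_rt, gamma0, gamma1, chance=0.25):
    """P(correct) as a response-time-driven mixture of solution behavior
    and rapid guessing (illustrative sketch).

    p_solution: probability the response reflects solution behavior,
                increasing with log response time via a logistic link.
    """
    p_solution = 1.0 / (1.0 + np.exp(-(gamma0 + gamma1 * log_rt)))
    p_rasch = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p_solution * p_rasch + (1.0 - p_solution) * chance
```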
DeCarlo, Lawrence T. – Applied Psychological Measurement, 2011
Cognitive diagnostic models (CDMs) attempt to uncover latent skills or attributes that examinees must possess in order to answer test items correctly. The DINA (deterministic input, noisy "and") model is a popular CDM that has been widely used. It is shown here that a logistic version of the model can easily be fit with standard software for…
Descriptors: Bayesian Statistics, Computation, Cognitive Tests, Diagnostic Tests
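For context, the DINA model scores each examinee on whether they possess all attributes an item requires; slip and guess parameters then set the response probability. A minimal sketch, with the logistic reparameterization the abstract alludes to noted in a comment; the parameter names are assumptions.

```python
import numpy as np

def dina_prob(alpha, q, slip, guess):
    """P(correct) under the DINA model.

    alpha: (n_attrs,) 0/1 mastery profile of one examinee
    q:     (n_items, n_attrs) Q-matrix of attribute requirements
    eta = 1 iff the examinee masters every attribute the item requires.
    """
    eta = np.all(alpha >= q, axis=1).astype(float)
    return (1.0 - slip) * eta + guess * (1.0 - eta)

# Equivalent logistic form: logit P(correct) = f_j + d_j * eta, with
# f_j = logit(guess_j) and d_j = logit(1 - slip_j) - logit(guess_j).
```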
Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo – Applied Psychological Measurement, 2009
This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), the pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…
Descriptors: Item Response Theory, Models, Selection, Methods
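The first three indices are direct functions of the log-likelihood and of MCMC deviance output and can be sketched compactly; PsBF and PPMC require full posterior machinery and are omitted here. A minimal sketch:

```python
import numpy as np

def aic(loglik, n_params):
    """Akaike's information criterion: -2 log L + 2k."""
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: -2 log L + k log n."""
    return -2.0 * loglik + n_params * np.log(n_obs)

def dic(deviance_samples, deviance_at_mean):
    """Deviance information criterion from MCMC output:
    DIC = mean deviance + p_D, with p_D = mean deviance minus the
    deviance at the posterior-mean parameter values."""
    d_bar = np.mean(deviance_samples)
    p_d = d_bar - deviance_at_mean
    return d_bar + p_d
```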