Search results: 13 journal articles, all from the Journal of Educational and Behavioral Statistics.
Justin L. Kern – Journal of Educational and Behavioral Statistics, 2024
Given the frequent presence of slipping and guessing in item responses, models that account for their effects are highly important. Unfortunately, the most common such model, the four-parameter item response theory model, has potentially severe deficiencies stemming from its possible unidentifiability. With this issue in mind, the…
Descriptors: Item Response Theory, Models, Bayesian Statistics, Generalization
Lei Guo; Wenjie Zhou; Xiao Li – Journal of Educational and Behavioral Statistics, 2024
The testlet design is very popular in educational and psychological assessments. This article proposes a new cognitive diagnosis model, the multiple-choice cognitive diagnostic testlet (MC-CDT) model, for tests using testlets consisting of MC items. The MC-CDT model uses examinees' original responses to MC items instead of dichotomously scored…
Descriptors: Multiple Choice Tests, Diagnostic Tests, Accuracy, Computer Software
Clemens Draxler; Andreas Kurz; Can Gürer; Jan Philipp Nolte – Journal of Educational and Behavioral Statistics, 2024
A modified and improved inductive inferential approach for evaluating item discriminations in a conditional maximum likelihood and Rasch modeling framework is suggested. The new approach involves the derivation of four hypothesis tests. It implies a linear restriction of the assumed set of probability distributions in the classical approach that…
Descriptors: Inferences, Test Items, Item Analysis, Maximum Likelihood Statistics
Yu, Albert; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2023
We propose a new item response theory growth model with item-specific learning parameters, or ISLP, and two variations of this model. In the ISLP model, either items or blocks of items have their own learning parameters. This model may be used to improve the efficiency of learning in a formative assessment. We show ways that the ISLP model's…
Descriptors: Item Response Theory, Learning, Markov Processes, Monte Carlo Methods
Sinharay, Sandip; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
Response time models (RTMs) are of increasing interest in educational and psychological testing. This article focuses on the lognormal model for response times, which is one of the most popular RTMs. Several existing statistics for testing normality and the fit of factor analysis models are repurposed for testing the fit of the lognormal model. A…
Descriptors: Educational Testing, Psychological Testing, Goodness of Fit, Factor Analysis
Zhan, Peida; Jiao, Hong; Man, Kaiwen; Wang, Lijun – Journal of Educational and Behavioral Statistics, 2019
In this article, we systematically introduce the Just Another Gibbs Sampler (JAGS) software program to fit common Bayesian cognitive diagnosis models (CDMs), including the deterministic inputs, noisy "and" gate model; the deterministic inputs, noisy "or" gate model; the linear logistic model; the reduced reparameterized unified…
Descriptors: Bayesian Statistics, Computer Software, Models, Test Items
Trendtel, Matthias; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2021
A multidimensional Bayesian item response model is proposed for modeling item position effects. The first dimension corresponds to the ability that is to be measured; the second dimension represents a factor that allows for individual differences in item position effects called persistence. This model allows for nonlinear item position effects on…
Descriptors: Bayesian Statistics, Item Response Theory, Test Items, Test Format
Wang, Chun; Fan, Zhewen; Chang, Hua-Hua; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2013
The item response times (RTs) collected from computerized testing represent an underutilized type of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. Current models for RTs mainly focus on parametric models, which have the…
Descriptors: Reaction Time, Computer Assisted Testing, Test Items, Accuracy
Mariano, Louis T.; Junker, Brian W. – Journal of Educational and Behavioral Statistics, 2007
When constructed response test items are scored by more than one rater, the repeated ratings allow for the consideration of individual rater bias and variability in estimating student proficiency. Several hierarchical models based on item response theory have been introduced to model such effects. In this article, the authors demonstrate how these…
Descriptors: Test Items, Item Response Theory, Rating Scales, Scoring
Swanson, David B.; Clauser, Brian E.; Case, Susan M.; Nungester, Ronald J.; Featherman, Carol – Journal of Educational and Behavioral Statistics, 2002
Outlines an approach to differential item functioning (DIF) analysis using hierarchical linear regression that makes it possible to combine results of logistic regression analyses across items to identify consistent sources of DIF, to quantify the proportion of explained variation in DIF coefficients, and to compare the predictive accuracy of…
Descriptors: Item Bias, Monte Carlo Methods, Prediction, Regression (Statistics)
Patz, Richard J.; Junker, Brian W. – Journal of Educational and Behavioral Statistics, 1999
Extends the basic Markov chain Monte Carlo (MCMC) strategy of R. Patz and B. Junker (1999) for Bayesian inference in complex Item Response Theory settings to address issues such as nonresponse, designed missingness, multiple raters, guessing behaviors, and partial credit (polytomous) test items. Applies the MCMC method to data from the National…
Descriptors: Bayesian Statistics, Item Response Theory, Markov Processes, Monte Carlo Methods
Segall, Daniel O. – Journal of Educational and Behavioral Statistics, 2002
Developed an item response model for characterizing test compromise that enables the estimation of item preview and score-gain distributions. In the approach, model parameters and posterior distributions are estimated by Markov chain Monte Carlo procedures. Simulation study results suggest that when at least some test items are known to be…
Descriptors: Estimation (Mathematics), Item Response Theory, Markov Processes, Models
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2006
A lognormal model for the response times of a person on a set of test items is investigated. The model has a parameter structure analogous to the two-parameter logistic response models in item response theory, with a parameter for the speed of each person as well as parameters for the time intensity and discriminating power of each item. It is…
Descriptors: Test Items, Vocational Aptitude, Reaction Time, Markov Processes
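The lognormal response-time model described in the last entry can be sketched by simulation: each person has a speed parameter, and each item has a time-intensity and a discriminating-power parameter, analogous to the two-parameter logistic model. The sketch below is illustrative only; the sample sizes and parameter distributions are assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

n_persons, n_items = 500, 20
tau = rng.normal(0.0, 0.3, n_persons)    # person speed (higher = faster)
beta = rng.normal(4.0, 0.5, n_items)     # item time intensity
alpha = rng.uniform(1.0, 2.5, n_items)   # item discriminating power

# Lognormal RT model: log T_ij ~ Normal(beta_i - tau_j, 1 / alpha_i^2),
# so faster persons and less time-intensive items yield shorter times.
mu = beta[None, :] - tau[:, None]                 # shape (n_persons, n_items)
log_t = rng.normal(mu, 1.0 / alpha[None, :])
rt = np.exp(log_t)                                # response times, all positive
```

A larger discriminating power `alpha_i` shrinks the residual variance of `log T` on item i, so response times separate persons of different speeds more sharply, mirroring the role of the discrimination parameter in the 2PL.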