Showing all 11 results
Peer reviewed
Jianbin Fu; Tsung-Han Ho; Xuan Tan – Practical Assessment, Research & Evaluation, 2025
Item parameter estimation using an item response theory (IRT) model with fixed ability estimates is useful in equating with small samples on anchor items. The current study explores the impact of three ability estimation methods (weighted likelihood estimation [WLE], maximum a posteriori [MAP], and posterior ability distribution estimation [PST])…
Descriptors: Item Response Theory, Test Items, Computation, Equated Scores
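As a concrete illustration of two of the ability estimation methods named above, here is a minimal grid-based sketch of MAP and WLE under a 2PL model. The item parameters and response pattern are invented, not from the study; the sketch relies on the standard fact that, for the 2PL, Warm's WLE maximizes the likelihood times the square root of the test information.

```python
import numpy as np

# Illustrative 2PL anchor items; a = discrimination, b = difficulty.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-1.0, -0.3, 0.0, 0.5, 1.2])
x = np.array([1, 1, 0, 1, 0])                      # one examinee's responses

theta = np.linspace(-4, 4, 801)                    # ability grid
P = 1 / (1 + np.exp(-a * (theta[:, None] - b)))    # (grid, items)
loglik = (x * np.log(P) + (1 - x) * np.log(1 - P)).sum(axis=1)

# MAP: maximize log-likelihood plus the log standard-normal prior density.
theta_map = theta[np.argmax(loglik - 0.5 * theta**2)]

# WLE (Warm, 1989): under the 2PL, maximize the log-likelihood plus
# half the log of the test information.
info = (a**2 * P * (1 - P)).sum(axis=1)
theta_wle = theta[np.argmax(loglik + 0.5 * np.log(info))]

print(f"MAP: {theta_map:.2f}  WLE: {theta_wle:.2f}")
```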
Peer reviewed
Karadavut, Tugba – International Journal of Assessment Tools in Education, 2019
Item Response Theory (IRT) models traditionally assume a normal distribution for ability. Although normality is often a reasonable assumption for ability, it is rarely met for observed scores in educational and psychological measurement. Assumptions regarding ability distribution were previously shown to have an effect on IRT parameter estimation.…
Descriptors: Item Response Theory, Computation, Bayesian Statistics, Ability
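A minimal sketch of the kind of effect at issue: abilities drawn from a right-skewed (standardized chi-square) distribution, Rasch responses generated from them, and EAP scoring that wrongly assumes a standard normal ability distribution. All values are illustrative, not from the article.

```python
import numpy as np
rng = np.random.default_rng(0)

# True abilities: standardized chi-square(4), so right-skewed with mean 0, sd 1.
theta_true = (rng.chisquare(4, size=2000) - 4) / np.sqrt(8)

b = np.linspace(-2, 2, 20)                       # illustrative Rasch difficulties
P = 1 / (1 + np.exp(-(theta_true[:, None] - b)))
X = (rng.random(P.shape) < P).astype(int)

# EAP scoring under a (misspecified) standard normal ability distribution.
grid = np.linspace(-4, 4, 401)
Pg = 1 / (1 + np.exp(-(grid[:, None] - b)))                  # (grid, items)
loglik = X @ np.log(Pg).T + (1 - X) @ np.log(1 - Pg).T       # (persons, grid)
post = np.exp(loglik) * np.exp(-0.5 * grid**2)               # posterior kernel
eap = (post * grid).sum(axis=1) / post.sum(axis=1)

lo, hi = theta_true < -1, theta_true > 1
print(f"bias below -1: {(eap[lo] - theta_true[lo]).mean():+.3f}")
print(f"bias above +1: {(eap[hi] - theta_true[hi]).mean():+.3f}")
```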
Peer reviewed
Kim, Sooyeon; Moses, Tim – ETS Research Report Series, 2016
The purpose of this study is to evaluate the extent to which item response theory (IRT) proficiency estimation methods are robust to the presence of aberrant responses under the "GRE"® General Test multistage adaptive testing (MST) design. To that end, a wide range of atypical response behaviors affecting as much as 10% of the test items…
Descriptors: Item Response Theory, Computation, Robustness (Statistics), Response Style (Tests)
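The sketch below mimics one atypical behavior from this literature, random guessing on a fraction of items, on a simple linear (non-MST) test, and shows how it pulls grid-based ML ability estimates away from the truth. The items, replication count, and 10% aberrance rate are illustrative, not the report's design.

```python
import numpy as np
rng = np.random.default_rng(1)

theta_true = 1.0
a = rng.uniform(0.8, 1.6, 40)                    # illustrative 2PL discriminations
b = rng.normal(0, 1, 40)                         # illustrative difficulties

def simulate(aberrant_frac):
    """Normal responding, then random guessing (p = .25) on a random subset."""
    p = 1 / (1 + np.exp(-a * (theta_true - b)))
    x = (rng.random(40) < p).astype(int)
    k = int(aberrant_frac * 40)
    idx = rng.choice(40, k, replace=False)
    x[idx] = (rng.random(k) < 0.25).astype(int)
    return x

def mle(x):
    """Grid-search maximum likelihood estimate of theta."""
    grid = np.linspace(-4, 4, 801)
    P = 1 / (1 + np.exp(-a * (grid[:, None] - b)))
    ll = (x * np.log(P) + (1 - x) * np.log(1 - P)).sum(axis=1)
    return grid[np.argmax(ll)]

clean = [mle(simulate(0.00)) for _ in range(200)]
aber = [mle(simulate(0.10)) for _ in range(200)]
print(f"mean MLE, clean: {np.mean(clean):.2f}  aberrant: {np.mean(aber):.2f}")
```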
Peer reviewed
Magis, David; Raiche, Gilles – Psychometrika, 2012
This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
Descriptors: Item Response Theory, Computation, Bayesian Statistics, Models
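A sketch of the two estimators for an invented 2PL test: the JM estimate maximizes the likelihood times the Jeffreys prior sqrt(I(theta)), while the WL estimate solves Warm's weighted score equation S(theta) + J(theta)/(2I(theta)) = 0. Under the 2PL, J equals the derivative of the test information, so the two criteria agree, which is the kind of correspondence the paper establishes.

```python
import numpy as np

# Illustrative 2PL items and one response pattern (not from the paper).
a = np.array([1.0, 1.4, 0.7, 1.1])
b = np.array([-0.5, 0.2, 0.8, 1.5])
x = np.array([1, 0, 1, 0])

theta = np.linspace(-4, 4, 8001)
P = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
Q = 1 - P
loglik = (x * np.log(P) + (1 - x) * np.log(Q)).sum(axis=1)
I = (a**2 * P * Q).sum(axis=1)                 # test information

# JM: mode of the likelihood times the Jeffreys prior sqrt(I).
theta_jm = theta[np.argmax(loglik + 0.5 * np.log(I))]

# WL: zero of S(theta) + J/(2I), with the 2PL score and Warm's J term.
S = (a * (x - P)).sum(axis=1)
J = (a**3 * P * Q * (Q - P)).sum(axis=1)
theta_wl = theta[np.argmin(np.abs(S + J / (2 * I)))]

print(f"JM = {theta_jm:.3f}, WL = {theta_wl:.3f}")   # identical under the 2PL
```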
MacDonald, George T. – ProQuest LLC, 2014
A simulation study was conducted to explore the performance of the linear logistic test model (LLTM) when the relationships between items and cognitive components were misspecified. Factors manipulated included percent of misspecification (0%, 1%, 5%, 10%, and 15%), form of misspecification (under-specification, balanced misspecification, and…
Descriptors: Simulation, Item Response Theory, Models, Test Items
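In the LLTM, each item difficulty is a linear combination of cognitive-component effects, b = Q @ eta. The sketch below flips a chosen percentage of Q-matrix entries and tracks how badly the misspecified model reproduces the true difficulties; the Q matrix, effects, and percentages are illustrative stand-ins for the manipulated factors, not the dissertation's design.

```python
import numpy as np
rng = np.random.default_rng(2)

n_items, n_comp = 30, 5
Q = rng.integers(0, 2, (n_items, n_comp)).astype(float)   # item-by-component matrix
eta = rng.normal(0, 1, n_comp)                            # component effects
b_true = Q @ eta                                          # LLTM item difficulties

def misspecify(Q, pct):
    """Flip a percentage of Q entries (under- and over-specification mixed)."""
    Qm = Q.copy()
    idx = rng.choice(Q.size, int(round(pct * Q.size)), replace=False)
    Qm.flat[idx] = 1 - Qm.flat[idx]
    return Qm

for pct in (0.00, 0.05, 0.15):
    Qm = misspecify(Q, pct)
    eta_hat, *_ = np.linalg.lstsq(Qm, b_true, rcond=None)  # best-fitting effects
    rmse = np.sqrt(((Qm @ eta_hat - b_true) ** 2).mean())
    print(f"{pct:4.0%} misspecified -> difficulty RMSE {rmse:.3f}")
```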
Peer reviewed
He, Wei; Reckase, Mark D. – Educational and Psychological Measurement, 2014
For computerized adaptive tests (CATs) to work well, they must have an item pool with sufficient numbers of good quality items. Many researchers have pointed out that, in developing item pools for CATs, not only is the item pool size important but also the distribution of item parameters and practical considerations such as content distribution…
Descriptors: Item Banks, Test Length, Computer Assisted Testing, Adaptive Testing
Peer reviewed
Magis, David; Beland, Sebastien; Raiche, Gilles – Applied Psychological Measurement, 2011
In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…
Descriptors: Test Length, Computation, Item Response Theory, Maximum Likelihood Statistics
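The divergence is easy to exhibit: for an all-correct pattern under the 2PL, the log-likelihood is strictly increasing in theta, so the ML estimate is infinite, while a normal prior (MAP) or Warm's weighted likelihood yields a finite interior maximum. The items below are illustrative.

```python
import numpy as np

a = np.array([1.0, 1.2, 0.8])                  # illustrative 2PL items
b = np.array([-0.5, 0.0, 0.7])
theta = np.linspace(-2, 10, 1201)
P = 1 / (1 + np.exp(-a * (theta[:, None] - b)))

loglik = np.log(P).sum(axis=1)                 # all responses correct
print(np.all(np.diff(loglik) > 0))             # True: no interior ML maximum

theta_map = theta[np.argmax(loglik - 0.5 * theta**2)]        # N(0,1) prior
info = (a**2 * P * (1 - P)).sum(axis=1)
theta_wle = theta[np.argmax(loglik + 0.5 * np.log(info))]    # Warm correction
print(f"MAP {theta_map:.2f}, WLE {theta_wle:.2f}")           # both finite
```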
Peer reviewed
He, Wei; Wolfe, Edward W. – Educational and Psychological Measurement, 2012
In individually administered intelligence tests, items are commonly presented in a sequence of increasing difficulty, and test administration is terminated after a predetermined number of incorrect answers. This practice produces stochastically censored data, a form of nonignorable missing data. By manipulating four factors…
Descriptors: Individual Testing, Intelligence Tests, Test Items, Test Length
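A minimal sketch of the censoring mechanism: items administered in order of increasing difficulty, testing stopped after three consecutive incorrect answers, and unreached items recorded as missing. Whether an item is reached depends on the examinee's own responses, which is what makes the missingness nonignorable. The difficulties and stopping rule are illustrative.

```python
import numpy as np
rng = np.random.default_rng(3)

b = np.linspace(-2.5, 2.5, 25)                 # difficulties in increasing order
theta = 0.0
DISCONTINUE_AFTER = 3                          # stop after 3 consecutive failures

x = np.full(b.size, np.nan)                    # NaN marks unreached (censored) items
run = 0
for i, bi in enumerate(b):
    p = 1 / (1 + np.exp(-(theta - bi)))        # Rasch success probability
    x[i] = rng.random() < p
    run = run + 1 if x[i] == 0 else 0
    if run == DISCONTINUE_AFTER:
        break

print(x)   # trailing NaNs: stochastically censored responses
```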
Peer reviewed
de la Torre, Jimmy; Song, Hao – Applied Psychological Measurement, 2009
Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…
Descriptors: Ability, Tests, Item Response Theory, Data Analysis
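A sketch of the contrast: two domains with abilities correlated at rho = 0.8, each measured by its own unidimensional Rasch items. Scoring each domain separately ignores the correlation, while a joint posterior over both abilities borrows strength across domains and typically reduces error. All values are illustrative.

```python
import numpy as np
rng = np.random.default_rng(4)

rho, n_person, n_item = 0.8, 500, 10
L = np.linalg.cholesky(np.array([[1, rho], [rho, 1]]))
theta = rng.normal(size=(n_person, 2)) @ L.T           # correlated abilities

b = np.linspace(-1.5, 1.5, n_item)                     # same difficulties per domain
X = [(rng.random((n_person, n_item)) <
      1 / (1 + np.exp(-(theta[:, [d]] - b)))).astype(int) for d in (0, 1)]

g = np.linspace(-4, 4, 81)
Pg = 1 / (1 + np.exp(-(g[:, None] - b)))
ll = [X[d] @ np.log(Pg).T + (1 - X[d]) @ np.log(1 - Pg).T for d in (0, 1)]

# Separate EAP per domain: standard normal prior, ignores rho.
post0 = np.exp(ll[0]) * np.exp(-0.5 * g**2)
sep0 = (post0 * g).sum(1) / post0.sum(1)

# Joint EAP for domain 0: bivariate normal prior on a 2-D grid.
G1, G2 = np.meshgrid(g, g, indexing="ij")
logprior = -(G1**2 - 2 * rho * G1 * G2 + G2**2) / (2 * (1 - rho**2))
post2 = np.exp(ll[0][:, :, None] + ll[1][:, None, :] + logprior)
joint0 = (post2 * G1).sum((1, 2)) / post2.sum((1, 2))

rmse = lambda e: np.sqrt(((e - theta[:, 0]) ** 2).mean())
print(f"separate EAP RMSE {rmse(sep0):.3f}, joint EAP RMSE {rmse(joint0):.3f}")
```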
Peer reviewed
Penfield, Randall D. – Applied Measurement in Education, 2006
This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
Descriptors: Bayesian Statistics, Adaptive Testing, Computer Assisted Testing, Test Items
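The MPI criterion weights each candidate item's information function by the current posterior for theta and selects the item with the largest integral. The sketch below does this for partial credit items, using the standard fact that PCM item information equals the conditional variance of the item score; the thresholds and posterior are illustrative, and the MEI criterion is omitted for brevity.

```python
import numpy as np

def pcm_probs(grid, deltas):
    """PCM category probabilities at each theta; returns (grid, categories)."""
    cum = np.cumsum(grid[:, None] - deltas, axis=1)
    num = np.exp(np.hstack([np.zeros((grid.size, 1)), cum]))
    return num / num.sum(axis=1, keepdims=True)

def pcm_info(grid, deltas):
    """PCM item information = Var(X | theta)."""
    P = pcm_probs(grid, deltas)
    cats = np.arange(P.shape[1])
    mean = P @ cats
    return P @ cats**2 - mean**2

grid = np.linspace(-4, 4, 401)
pool = [np.array([-1.0, 0.0, 1.0]),    # illustrative step thresholds for
        np.array([-0.5, 0.5, 1.5]),    # three unused candidate items
        np.array([0.2, 0.9, 1.8])]

# Stand-in posterior after some responses: normal centered at 0.7, normalized.
post = np.exp(-0.5 * ((grid - 0.7) / 0.6) ** 2)
post /= post.sum()

mpi = [float(pcm_info(grid, d) @ post) for d in pool]
print("MPI values:", np.round(mpi, 3), "-> select item", int(np.argmax(mpi)))
```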
van der Linden, Wim J. – 1996
R. J. Owen (1975) proposed an approximate empirical Bayes procedure for item selection in adaptive testing. The procedure replaces the true posterior by a normal approximation with closed-form expressions for its first two moments. This approximation was necessary to minimize the computational complexity involved in a fully Bayesian approach, but…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computation
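The flavor of Owen's approximation can be shown in a few lines: after one response, replace the exact posterior by a normal with the same mean and variance. The sketch below computes both on a grid for an invented 2PL item and reports the skewness that the normal approximation throws away; Owen's own closed-form updates for the normal-ogive model are not reproduced here.

```python
import numpy as np

grid = np.linspace(-5, 5, 2001)
dx = grid[1] - grid[0]
prior = np.exp(-0.5 * grid**2)                  # standard normal prior kernel

a_item, b_item = 1.3, 0.4                       # illustrative 2PL item
like = 1 / (1 + np.exp(-a_item * (grid - b_item)))   # correct response

post = prior * like                             # exact posterior on the grid
post /= post.sum() * dx

mu = (grid * post).sum() * dx                   # first two posterior moments,
var = ((grid - mu) ** 2 * post).sum() * dx      # which define the normal surrogate
normal = np.exp(-0.5 * (grid - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Skewness of the exact posterior: what the normal approximation discards.
skew = (((grid - mu) / np.sqrt(var)) ** 3 * post).sum() * dx
print(f"mu={mu:.3f}, var={var:.3f}, skewness={skew:.3f}")
```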