Showing 46 to 60 of 172 results
Peer reviewed
Luo, Yong; Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2019
Plausible values can be used to either estimate population-level statistics or compute point estimates of latent variables. While it is well known that five plausible values are usually sufficient for accurate estimation of population-level statistics in large-scale surveys, the minimum number of plausible values needed to obtain accurate latent…
Descriptors: Item Response Theory, Monte Carlo Methods, Markov Processes, Outcome Measures
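As a rough illustration of what drawing plausible values involves (a sketch under a unidimensional Rasch model, not the authors' setup; all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_likelihood(theta_grid, responses, difficulties):
    """Likelihood of one response pattern at each grid point (Rasch)."""
    p = 1 / (1 + np.exp(-(theta_grid[:, None] - difficulties[None, :])))
    return np.prod(np.where(responses == 1, p, 1 - p), axis=1)

def draw_plausible_values(responses, difficulties, n_pv=5):
    """Draw plausible values from a grid approximation of the posterior
    of theta, using a standard-normal population prior."""
    grid = np.linspace(-4, 4, 161)
    post = rasch_likelihood(grid, responses, difficulties) * np.exp(-grid**2 / 2)
    post /= post.sum()
    return rng.choice(grid, size=n_pv, p=post)

# Hypothetical difficulties and one response pattern, for illustration.
b = np.linspace(-2, 2, 10)
x = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0])
print(draw_plausible_values(x, b))  # five posterior draws for theta
```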
Peer reviewed
Fox, Jean-Paul; Marianti, Sukaesi – Journal of Educational Measurement, 2017
Response accuracy and response time data can be analyzed with a joint model to measure ability and speed of working, while accounting for relationships between item and person characteristics. In this study, person-fit statistics are proposed for joint models to detect aberrant response accuracy and/or response time patterns. The person-fit tests…
Descriptors: Accuracy, Reaction Time, Statistics, Test Items
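For flavor, here is the classical lz person-fit statistic for response accuracy alone; the article's statistics extend this idea to joint accuracy/response-time models, and the values below are hypothetical:

```python
import numpy as np

def lz_person_fit(responses, p):
    """Standardized log-likelihood person-fit statistic (lz) for a
    dichotomous response pattern given model probabilities p."""
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    var = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - mean) / np.sqrt(var)

# Illustrative values; a large negative lz flags an aberrant pattern.
p = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3])
x = np.array([0, 0, 1, 0, 1, 1])  # many "surprising" responses
print(lz_person_fit(x, p))
```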
Peer reviewed
Wyse, Adam E. – Educational Measurement: Issues and Practice, 2017
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
Descriptors: Cutting Scores, Item Response Theory, Bayesian Statistics, Maximum Likelihood Statistics
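A minimal sketch of the last of these approaches, translating summed Angoff ratings through the test characteristic curve under a 2PL model (parameter values are hypothetical):

```python
import numpy as np
from scipy.optimize import brentq

def tcc(theta, a, b):
    """Test characteristic curve for a 2PL: expected raw score at theta."""
    return np.sum(1 / (1 + np.exp(-a * (theta - b))))

def angoff_cut_via_tcc(ratings, a, b):
    """Find the theta whose expected raw score equals the summed
    Angoff item ratings."""
    target = np.sum(ratings)
    return brentq(lambda t: tcc(t, a, b) - target, -6, 6)

# Hypothetical 2PL parameters and panelist ratings, for illustration.
a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
ratings = np.array([0.8, 0.7, 0.6, 0.5, 0.4])
print(angoff_cut_via_tcc(ratings, a, b))  # theta cut score
```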
Peer reviewed
Lee, Woo-yeol; Cho, Sun-Joo – Journal of Educational Measurement, 2017
Cross-level invariance in a multilevel item response model can be investigated by testing whether the within-level item discriminations are equal to the between-level item discriminations. Testing the cross-level invariance assumption is important to understand constructs in multilevel data. However, in most multilevel item response model…
Descriptors: Test Items, Item Response Theory, Item Analysis, Simulation
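One simple way to frame such a test, assuming independent within- and between-level estimates (a simplification; the article's treatment is more careful), is a Wald-type comparison per item:

```python
import numpy as np
from scipy.stats import norm

def cross_level_invariance_z(a_within, se_within, a_between, se_between):
    """Wald-type z test that an item's within-level discrimination
    equals its between-level discrimination."""
    z = (a_within - a_between) / np.sqrt(se_within**2 + se_between**2)
    return z, 2 * norm.sf(abs(z))

# Hypothetical estimates and standard errors for one item.
print(cross_level_invariance_z(1.10, 0.08, 0.85, 0.12))
```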
Peer reviewed
Magis, David; Tuerlinckx, Francis; De Boeck, Paul – Journal of Educational and Behavioral Statistics, 2015
This article proposes a novel approach to detect differential item functioning (DIF) among dichotomously scored items. Unlike standard DIF methods that perform an item-by-item analysis, we propose the "LR lasso DIF method": a logistic regression (LR) model is formulated for all item responses. The model contains item-specific intercepts,…
Descriptors: Test Bias, Test Items, Regression (Statistics), Scores
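A toy sketch of the idea: fit an L1-penalized logistic regression to long-format responses with item intercepts, a total-score ability proxy, and item-by-group interactions; nonzero interaction coefficients flag DIF. Note that scikit-learn penalizes every coefficient, whereas the article's method penalizes only the interactions, so this is only an approximation, run on simulated data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_persons, n_items = 500, 6

# Simulated data: item 0 is harder for the focal group (DIF).
theta = rng.normal(size=n_persons)
group = rng.integers(0, 2, size=n_persons)
b = np.linspace(-1, 1, n_items)
dif = np.zeros(n_items)
dif[0] = 0.8
p = 1 / (1 + np.exp(-(theta[:, None] - b - dif * group[:, None])))
resp = (rng.random((n_persons, n_items)) < p).astype(int)

# Long format: item dummies, a total-score ability proxy, and
# item-by-group interactions the lasso can shrink to zero.
total = resp.sum(axis=1)
X, y = [], []
for i in range(n_persons):
    for j in range(n_items):
        dummy = np.eye(n_items)[j]
        X.append(np.concatenate([dummy, [total[i]], dummy * group[i]]))
        y.append(resp[i, j])

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(np.array(X), np.array(y))
print(model.coef_[0][n_items + 1:].round(2))  # nonzero entries flag DIF
```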
Kim, Weon H. – ProQuest LLC, 2017
The purpose of the present study is to apply the item response theory (IRT) and testlet response theory (TRT) models to a reading comprehension test. This study applied the TRT models and the traditional IRT model to a seventh-grade reading comprehension test (n = 8,815) with eight testlets. These three models were compared to determine the best…
Descriptors: Item Response Theory, Test Items, Correlation, Reading Tests
Peer reviewed
Ting, Mu Yu – EURASIA Journal of Mathematics, Science & Technology Education, 2017
Using the capabilities of expert knowledge structures, the researcher prepared test questions on the university calculus topic of "finding the area by integration." The quiz is divided into two types of multiple-choice items (one out of four and one out of many). After the calculus course was taught and tested, the results revealed that…
Descriptors: Calculus, Mathematics Instruction, College Mathematics, Multiple Choice Tests
Peer reviewed
Chen, Ping – Journal of Educational and Behavioral Statistics, 2017
Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…
Descriptors: Test Items, Item Response Theory, Test Construction, Adaptive Testing
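As a unidimensional illustration of a single EM cycle for calibrating one new item from examinees' operational responses (a Rasch sketch, far simpler than the multidimensional case the article treats; all data are simulated):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
grid = np.linspace(-4, 4, 81)

def posterior_over_grid(x_oper, b_oper):
    """E-step: posterior of theta given the operational (known) items."""
    p = 1 / (1 + np.exp(-(grid[:, None] - b_oper[None, :])))
    lik = np.prod(np.where(x_oper == 1, p, 1 - p), axis=1)
    w = lik * np.exp(-grid**2 / 2)
    return w / w.sum()

def calibrate_new_item(x_new, posteriors):
    """M-step: choose the new item's difficulty to maximize the expected
    log-likelihood over each examinee's theta posterior."""
    def neg_ell(b_new):
        p = 1 / (1 + np.exp(-(grid - b_new)))
        ll = np.where(x_new[:, None] == 1, np.log(p), np.log(1 - p))
        return -np.sum(posteriors * ll)
    return minimize_scalar(neg_ell, bounds=(-4, 4), method="bounded").x

# Simulated examinees: 10 operational items plus 1 new item (b = 0.7).
b_oper = np.linspace(-2, 2, 10)
theta = rng.normal(size=300)
p_oper = 1 / (1 + np.exp(-(theta[:, None] - b_oper)))
x_oper = (rng.random((300, 10)) < p_oper).astype(int)
x_new = (rng.random(300) < 1 / (1 + np.exp(-(theta - 0.7)))).astype(int)

posteriors = np.array([posterior_over_grid(x, b_oper) for x in x_oper])
print(calibrate_new_item(x_new, posteriors))  # should land near 0.7
```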
Peer reviewed
Choi, In-Hee; Wilson, Mark – Educational and Psychological Measurement, 2015
An essential feature of the linear logistic test model (LLTM) is that item difficulties are explained using item design properties. By taking advantage of this explanatory aspect of the LLTM, in a mixture extension of the LLTM, the meaning of latent classes is specified by how item properties affect item difficulties within each class. To improve…
Descriptors: Classification, Test Items, Difficulty Level, Statistical Analysis
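The explanatory core of the LLTM fits in a few lines: each item's difficulty is a weighted sum of its design properties. A sketch with a hypothetical Q matrix and property effects:

```python
import numpy as np

# LLTM: b_i = sum_k q_ik * eta_k, where q_ik indicates whether item i
# involves design property k. Both Q and eta below are illustrative.
Q = np.array([[1, 0, 1],    # item 1 involves properties 1 and 3
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]])
eta = np.array([0.5, -0.3, 0.8])  # difficulty effect of each property

b = Q @ eta                       # model-implied item difficulties
print(b)
```

In the mixture extension the abstract describes, a separate eta vector would be estimated within each latent class, so the classes are interpreted through how the properties drive difficulty.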
Peer reviewed
Meng, Xiang-Bin; Tao, Jian; Chang, Hua-Hua – Journal of Educational Measurement, 2015
The assumption of conditional independence between the responses and the response times (RTs) for a given person is common in RT modeling. However, when the speed of a test taker is not constant, this assumption will be violated. In this article we propose a conditional joint model for item responses and RTs, which incorporates a covariance…
Descriptors: Reaction Time, Test Items, Accuracy, Models
Peer reviewed
Dardick, William R.; Mislevy, Robert J. – Educational and Psychological Measurement, 2016
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Descriptors: Bayesian Statistics, Probability, Data Analysis, Item Response Theory
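A sketch of the underlying computation: the posterior probability that a response pattern belongs to an unscalable (flat-responding) class rather than the Rasch class, with theta fixed for simplicity (a full model would integrate over it and estimate the class proportion; all values are illustrative):

```python
import numpy as np

def rasch_pattern_prob(responses, theta, difficulties):
    """Probability of a response pattern under the Rasch model."""
    p = 1 / (1 + np.exp(-(theta - difficulties)))
    return np.prod(np.where(responses == 1, p, 1 - p))

def posterior_unscalable(responses, theta, difficulties,
                         pi_unscalable=0.1, guess=0.5):
    """Posterior probability that a pattern comes from an unscalable
    class (constant response rate) rather than the Rasch class."""
    lik_rasch = rasch_pattern_prob(responses, theta, difficulties)
    lik_unsc = guess ** len(responses)  # flat response process
    num = pi_unscalable * lik_unsc
    return num / (num + (1 - pi_unscalable) * lik_rasch)

b = np.linspace(-1.5, 1.5, 6)
x = np.array([1, 0, 1, 0, 1, 0])  # Guttman-inconsistent pattern
print(posterior_unscalable(x, theta=0.0, difficulties=b))
```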
Peer reviewed
PDF available on ERIC
Arenson, Ethan A.; Karabatsos, George – Grantee Submission, 2017
Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, both strictly monotone functions of person test ability. Such assumptions can be overly restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…
Descriptors: Bayesian Statistics, Item Response Theory, Nonparametric Statistics, Models
Lamsal, Sunil – ProQuest LLC, 2015
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbins-Monro estimation. With each…
Descriptors: Item Response Theory, Monte Carlo Methods, Maximum Likelihood Statistics, Markov Processes
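For the MCMC route, a minimal random-walk Metropolis sampler for a single examinee's ability under the 3PL model (a sketch with hypothetical parameters, not the full estimation machinery the dissertation compares):

```python
import numpy as np

rng = np.random.default_rng(2)

def loglik_3pl(theta, x, a, b, c):
    """Log-likelihood of a response pattern under the 3PL model."""
    p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

def mh_theta_chain(x, a, b, c, n_iter=2000, step=0.5):
    """Random-walk Metropolis sampler for one person's theta, with a
    standard-normal prior."""
    theta, chain = 0.0, []
    lp = loglik_3pl(theta, x, a, b, c) - 0.5 * theta**2
    for _ in range(n_iter):
        prop = theta + rng.normal(scale=step)
        lp_prop = loglik_3pl(prop, x, a, b, c) - 0.5 * prop**2
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)

# Hypothetical 3PL parameters and one response pattern.
a = np.array([1.0, 1.3, 0.9, 1.1])
b = np.array([-1.0, 0.0, 0.5, 1.0])
c = np.full(4, 0.2)
x = np.array([1, 1, 0, 0])
print(mh_theta_chain(x, a, b, c).mean())  # posterior mean of theta
```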
Peer reviewed
Finch, Holmes; Edwards, Julianne M. – Educational and Psychological Measurement, 2016
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
Descriptors: Item Response Theory, Computation, Nonparametric Statistics, Bayesian Statistics
Peer reviewed
PDF available on ERIC
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S. – Educational Research and Reviews, 2016
The aim of this study is to determine the difference in variance between the maximum likelihood and expected a posteriori estimation methods as a function of the number of items on an aptitude test. The variance reflects the accuracy of both the maximum likelihood and Bayesian estimation methods. The test consists of three subtests, each with 40 multiple-choice…
Descriptors: Maximum Likelihood Statistics, Computation, Item Response Theory, Test Items
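A compact sketch of the EAP estimator under a 2PL model, computing the posterior mean of theta over a quadrature grid with a standard-normal prior (parameter values are hypothetical):

```python
import numpy as np

def eap_theta(responses, a, b, n_quad=61):
    """EAP ability estimate: posterior mean of theta over a quadrature
    grid, with a standard-normal prior and 2PL item probabilities."""
    grid = np.linspace(-4, 4, n_quad)
    p = 1 / (1 + np.exp(-a[None, :] * (grid[:, None] - b[None, :])))
    lik = np.prod(np.where(responses == 1, p, 1 - p), axis=1)
    w = lik * np.exp(-grid**2 / 2)
    w /= w.sum()
    return np.sum(w * grid)

# Illustrative 2PL parameters and one response pattern.
a = np.array([1.0, 1.2, 0.8, 1.4])
b = np.array([-0.5, 0.0, 0.5, 1.0])
print(eap_theta(np.array([1, 1, 0, 1]), a, b))
```

Unlike ML, the EAP estimate exists even for all-correct or all-incorrect patterns, which is one reason the two methods' variances diverge as test length changes.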