Showing 1 to 15 of 34 results
Peer reviewed
Direct link
Wyse, Adam E.; McBride, James R. – Measurement: Interdisciplinary Research and Perspectives, 2022
A common practical challenge is how to assign ability estimates to all-incorrect and all-correct response patterns when using item response theory (IRT) models and maximum likelihood estimation (MLE), since the ability estimates for these response patterns are −∞ or +∞. This article uses a simulation study and data from an operational K-12…
Descriptors: Scores, Adaptive Testing, Computer Assisted Testing, Test Length
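To see why MLE breaks down here: for an all-correct pattern, every item's probability of a correct response increases with θ, so the likelihood has no finite maximizer. A minimal sketch under a 2PL model with illustrative parameters (not the article's data):

```python
import numpy as np

a = np.array([1.0, 1.2, 0.8])   # discriminations (illustrative)
b = np.array([-0.5, 0.0, 0.5])  # difficulties (illustrative)

def loglik_all_correct(theta):
    # log-likelihood of answering every item correctly under 2PL
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.sum(np.log(p))

for theta in [0.0, 2.0, 4.0, 8.0]:
    print(theta, round(loglik_all_correct(theta), 4))
# The log-likelihood keeps increasing in theta, so no finite MLE exists;
# the all-incorrect pattern diverges symmetrically to -infinity.
```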
Peer reviewed
Direct link
Zhou, Sherry; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2020
The semi-generalized partial credit model (Semi-GPCM) has been proposed as a unidimensional modeling method for handling not-applicable and neutral scale responses, and it has been suggested that the model may be of use in handling missing data in scale items. The purpose of this study is to evaluate the ability of the…
Descriptors: Models, Statistical Analysis, Response Style (Tests), Test Items
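For context, a standard formulation of the generalized partial credit model that the Semi-GPCM builds on (notation ours, not the article's) gives the probability of a response in category k of item i as

$$P(X_i = k \mid \theta) = \frac{\exp \sum_{j=0}^{k} a_i(\theta - b_{ij})}{\sum_{c=0}^{m_i} \exp \sum_{j=0}^{c} a_i(\theta - b_{ij})}, \qquad a_i(\theta - b_{i0}) \equiv 0,$$

with the Semi-GPCM adding machinery for the not-applicable and neutral categories on top of this.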
Peer reviewed
PDF on ERIC: Download full text
Kilic, Abdullah Faruk; Uysal, Ibrahim; Atar, Burcu – International Journal of Assessment Tools in Education, 2020
This Monte Carlo simulation study aimed to investigate confirmatory factor analysis (CFA) estimation methods under different conditions, such as sample size, distribution of indicators, test length, average factor loading, and factor structure. Binary data were generated to compare the performance of maximum likelihood (ML), mean and variance…
Descriptors: Factor Analysis, Computation, Methods, Sample Size
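A hedged sketch of the data-generation step such Monte Carlo studies typically use: binary indicators produced by dichotomizing a one-factor model at a threshold. All parameter values below are illustrative, not the study's conditions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 10                    # sample size and test length (illustrative)
loadings = np.full(p, 0.6)        # average factor loading condition
eta = rng.normal(size=n)          # latent factor scores
eps = rng.normal(size=(n, p)) * np.sqrt(1.0 - loadings**2)
ystar = eta[:, None] * loadings + eps  # continuous latent responses
y = (ystar > 0.0).astype(int)     # dichotomize at a zero threshold
# `y` would then be fit with ML, WLSMV, etc., and the recovered
# loadings compared against the generating values.
```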
Peer reviewed
PDF on ERIC: Download full text
Kilic, Abdullah Faruk; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Weighted least squares (WLS), weighted least squares mean-and-variance-adjusted (WLSMV), unweighted least squares mean-and-variance-adjusted (ULSMV), maximum likelihood (ML), robust maximum likelihood (MLR) and Bayesian estimation methods were compared in mixed item response type data via Monte Carlo simulation. The percentage of polytomous items,…
Descriptors: Factor Analysis, Computation, Least Squares Statistics, Maximum Likelihood Statistics
Lamsal, Sunil – ProQuest LLC, 2015
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include marginal maximum likelihood estimation, fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and Metropolis-Hastings Robbins-Monro estimation. With each…
Descriptors: Item Response Theory, Monte Carlo Methods, Maximum Likelihood Statistics, Markov Processes
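For reference, the unidimensional three-parameter logistic (3PL) model these procedures estimate is

$$P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}},$$

with discrimination $a_i$, difficulty $b_i$, and pseudo-guessing lower asymptote $c_i$.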
Peer reviewed
Direct link
Stucky, Brian D.; Thissen, David; Edelen, Maria Orlando – Applied Psychological Measurement, 2013
Test developers often need to create unidimensional scales from multidimensional data. For item analysis, "marginal trace lines" capture the relation with the general dimension while accounting for nuisance dimensions and may prove to be a useful technique for creating short-form tests. This article describes the computations needed to obtain…
Descriptors: Test Construction, Test Length, Item Analysis, Item Response Theory
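Assuming a model with a general dimension θ and nuisance dimensions η with density f(η) (our notation, not necessarily the article's), a marginal trace line for item i is the item response function averaged over the nuisance dimensions:

$$T_i(\theta) = \int P_i(\theta, \eta)\, f(\eta)\, d\eta.$$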
Peer reviewed
Direct link
Doebler, Anna; Doebler, Philipp; Holling, Heinz – Psychometrika, 2013
The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
Descriptors: Foreign Countries, Item Response Theory, Computation, Hypothesis Testing
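The normal-approximation interval the abstract refers to is the Wald interval

$$\hat\theta \pm \frac{z_{1-\alpha/2}}{\sqrt{I(\hat\theta)}},$$

where I(θ) is the test information function. With short tests, I(θ̂) is small and the sampling distribution of θ̂ can be markedly non-normal, which is what drives the undercoverage.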
Peer reviewed
Direct link
He, Wei; Reckase, Mark D. – Educational and Psychological Measurement, 2014
For computerized adaptive tests (CATs) to work well, they must have an item pool with sufficient numbers of good quality items. Many researchers have pointed out that, in developing item pools for CATs, not only is the item pool size important but also the distribution of item parameters and practical considerations such as content distribution…
Descriptors: Item Banks, Test Length, Computer Assisted Testing, Adaptive Testing
Peer reviewed
Direct link
Magis, David; Beland, Sebastien; Raiche, Gilles – Applied Psychological Measurement, 2011
In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…
Descriptors: Test Length, Computation, Item Response Theory, Maximum Likelihood Statistics
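One standard remedy for these infinite estimates (not necessarily the approach taken in this article) is Warm's weighted likelihood estimator, which maximizes the likelihood weighted by the square root of the test information,

$$\hat\theta_{\mathrm{WLE}} = \arg\max_{\theta}\; L(\theta)\sqrt{I(\theta)},$$

and remains finite even for perfect and zero scores.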
Peer reviewed
Direct link
He, Wei; Wolfe, Edward W. – Educational and Psychological Measurement, 2012
In administration of individually administered intelligence tests, items are commonly presented in a sequence of increasing difficulty, and test administration is terminated after a predetermined number of incorrect answers. This practice produces stochastically censored data, a form of nonignorable missing data. By manipulating four factors…
Descriptors: Individual Testing, Intelligence Tests, Test Items, Test Length
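A hedged sketch of how such a discontinue rule censors the data; the Rasch response model and all parameter values are illustrative assumptions, not the study's design.

```python
import numpy as np

rng = np.random.default_rng(7)
b = np.sort(rng.normal(size=20))   # item difficulties, easy -> hard
theta, k = 0.0, 3                  # ability; stop after k errors in a row

resp = np.full(b.size, np.nan)     # NaN marks items never administered
streak = 0
for i, bi in enumerate(b):
    x = rng.random() < 1.0 / (1.0 + np.exp(-(theta - bi)))  # Rasch response
    resp[i] = x
    streak = 0 if x else streak + 1
    if streak == k:                # discontinue rule triggers
        break

print(resp)  # trailing NaNs are the stochastically censored responses
```

Because stopping depends on the responses themselves (and hence on ability), the resulting missingness is nonignorable rather than missing at random.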
Peer reviewed
Direct link
Wyse, Adam E.; Hao, Shiqi – Applied Psychological Measurement, 2012
This article introduces two new classification consistency indices that can be used when item response theory (IRT) models have been applied. The new indices are shown to be related to Rudner's classification accuracy index and Guo's classification accuracy index. The Rudner- and Guo-based classification accuracy and consistency indices are…
Descriptors: Item Response Theory, Classification, Accuracy, Reliability
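A hedged sketch of a Rudner-style accuracy computation for a single examinee, assuming the usual normal approximation θ ~ N(θ̂, SE²); the values are illustrative:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def rudner_accuracy(theta_hat, se, cut):
    # probability that the true theta falls on the same side of the
    # cut score as the classification implied by theta_hat
    p_below = phi((cut - theta_hat) / se)
    return p_below if theta_hat < cut else 1.0 - p_below

print(rudner_accuracy(theta_hat=0.8, se=0.3, cut=0.0))  # about 0.996
```

An overall accuracy index averages these probabilities over examinees.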
Peer reviewed
Direct link
Roberts, James S.; Thompson, Vanessa M. – Applied Psychological Measurement, 2011
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
Descriptors: Statistical Analysis, Markov Processes, Computation, Monte Carlo Methods
Peer reviewed
Direct link
Kieftenbeld, Vincent; Natesan, Prathiba – Applied Psychological Measurement, 2012
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Descriptors: Test Length, Markov Processes, Item Response Theory, Monte Carlo Methods
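For reference, the graded response model being recovered specifies cumulative category probabilities

$$P^{*}_{ik}(\theta) = \frac{1}{1 + \exp[-a_i(\theta - b_{ik})]},$$

with category probabilities $P_{ik}(\theta) = P^{*}_{ik}(\theta) - P^{*}_{i,k+1}(\theta)$, where $P^{*}_{i0} = 1$ and $P^{*}_{i,m_i+1} = 0$.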
Peer reviewed
Direct link
Paek, Insu; Wilson, Mark – Educational and Psychological Measurement, 2011
This study elaborates the Rasch differential item functioning (DIF) model formulation in the context of marginal maximum likelihood estimation. The Rasch DIF model's performance was also examined and compared with the Mantel-Haenszel (MH) procedure under small-sample and short-test-length conditions through simulations. The theoretically known…
Descriptors: Test Bias, Test Length, Statistical Inference, Geometric Concepts
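A common way to write such a Rasch DIF formulation (our notation; the article's parameterization may differ) is

$$\operatorname{logit} P(X_{pi} = 1) = \theta_p - b_i + \gamma_i g_p,$$

where $g_p$ indicates membership in the focal group and $\gamma_i$ is the DIF effect for item i, with $\gamma_i = 0$ meaning no DIF.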
Peer reviewed
Direct link
Finkelman, Matthew David – Applied Psychological Measurement, 2010
In sequential mastery testing (SMT), assessment via computer is used to classify examinees into one of two mutually exclusive categories. Unlike paper-and-pencil tests, SMT has the capability to use variable-length stopping rules. One approach to shortening variable-length tests is stochastic curtailment, which halts examination if the probability…
Descriptors: Mastery Tests, Computer Assisted Testing, Adaptive Testing, Test Length
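A hedged sketch of stochastic curtailment with a number-correct cut score; the fixed per-item success probability and all values are illustrative simplifications, not Finkelman's procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

def stop_early(score, n_remaining, p_item, cut, eps=0.01, reps=10000):
    """Return 'pass'/'fail' if the other outcome has probability < eps
    among simulated test completions, else None (keep testing)."""
    future = rng.binomial(n_remaining, p_item, size=reps)
    p_pass = np.mean(score + future >= cut)
    if p_pass >= 1.0 - eps:
        return "pass"
    if p_pass <= eps:
        return "fail"
    return None

# 20 of 30 items seen, 19 correct, mastery cut of 21 correct overall:
print(stop_early(score=19, n_remaining=10, p_item=0.6, cut=21))  # 'pass'
```

The test halts as soon as the eventual classification is all but certain, shortening variable-length tests with little loss of decision accuracy.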