Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 10 |
Descriptor
| Sample Size | 14 |
| Test Length | 14 |
| Item Response Theory | 10 |
| Computation | 7 |
| Monte Carlo Methods | 6 |
| Error of Measurement | 4 |
| Maximum Likelihood Statistics | 4 |
| Models | 4 |
| Test Items | 4 |
| Comparative Analysis | 3 |
| Simulation | 3 |
Source
| Applied Psychological Measurement | 14 |
Author
| Finch, Holmes | 2 |
| Stark, Stephen | 2 |
| Woods, Carol M. | 2 |
| Cheng, Ying | 1 |
| Chernyshenko, Oleksandr S. | 1 |
| De Ayala, R. J. | 1 |
| Drasgow, Fritz | 1 |
| Furlow, Carolyn F. | 1 |
| Gagne, Phill | 1 |
| Hambleton, Ronald K. | 1 |
| Hoyle, Rick H. | 1 |
Publication Type
| Journal Articles | 14 |
| Reports - Research | 9 |
| Reports - Evaluative | 5 |
Lathrop, Quinn N.; Cheng, Ying – Applied Psychological Measurement, 2013
Within the framework of item response theory (IRT), there are two recent lines of work on the estimation of classification accuracy (CA) rate. One approach estimates CA when decisions are made based on total sum scores; the other, when decisions are based on latent trait estimates. The former is referred to as the Lee approach, and the latter, the Rudner approach,…
Descriptors: Item Response Theory, Accuracy, Classification, Computation
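The latent-trait (Rudner-style) approach described above can be sketched with a normal approximation to the sampling distribution of the trait estimate; the function name and the specific inputs below are illustrative assumptions, not taken from the article:

```python
from statistics import NormalDist

def rudner_ca(theta_hat, se, cut):
    """Probability that an examinee with latent trait estimate theta_hat
    (standard error se) is classified above the cut score, using a normal
    approximation to the sampling distribution of the estimate."""
    return 1.0 - NormalDist(mu=theta_hat, sigma=se).cdf(cut)

# An examinee well above the cut with a small standard error is almost
# certainly classified correctly.
p = rudner_ca(theta_hat=1.5, se=0.3, cut=0.0)
```

Aggregating such probabilities over examinees gives an estimate of the overall classification accuracy rate.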
Lei, Pui-Wa; Zhao, Yu – Applied Psychological Measurement, 2012
Vertical scaling is necessary to facilitate comparison of scores from test forms of different difficulty levels. It is widely used to enable the tracking of student growth in academic performance over time. Most previous studies on vertical scaling methods assume relatively long tests and large samples. Little is known about their performance when…
Descriptors: Scaling, Item Response Theory, Test Length, Sample Size
Nandakumar, Ratna; Yu, Feng; Zhang, Yanwei – Applied Psychological Measurement, 2011
DETECT is a nonparametric methodology to identify the dimensional structure underlying test data. The associated DETECT index, D_max, denotes the degree of multidimensionality in data. Conditional covariances (CCOV) are the building blocks of this index. In specifying population CCOVs, the latent test composite theta_TT…
Descriptors: Nonparametric Statistics, Statistical Analysis, Tests, Data
Roberts, James S.; Thompson, Vanessa M. – Applied Psychological Measurement, 2011
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
Descriptors: Statistical Analysis, Markov Processes, Computation, Monte Carlo Methods
Yang, Chongming; Nay, Sandra; Hoyle, Rick H. – Applied Psychological Measurement, 2010
Lengthy scales or testlets pose certain challenges for structural equation modeling (SEM) if all the items are included as indicators of a latent construct. Three general approaches to modeling lengthy scales in SEM (parceling, latent scoring, and shortening) have been reviewed and evaluated. A hypothetical population model is simulated containing…
Descriptors: Structural Equation Models, Measures (Individuals), Sample Size, Item Response Theory
Furlow, Carolyn F.; Ross, Terris Raiford; Gagne, Phill – Applied Psychological Measurement, 2009
Douglas, Roussos, and Stout introduced the concept of differential bundle functioning (DBF) for identifying the underlying causes of differential item functioning (DIF). In this study, the reference group was simulated to have a higher mean ability than the focal group on a nuisance dimension, resulting in DIF for each of the multidimensional items…
Descriptors: Test Bias, Test Items, Reference Groups, Simulation
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
Woods, Carol M. – Applied Psychological Measurement, 2008
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…
Descriptors: Test Length, Computation, Item Response Theory, Maximum Likelihood Statistics
Woods, Carol M. – Applied Psychological Measurement, 2007
Ramsay curve item response theory (RC-IRT) was recently developed to detect and correct for nonnormal latent variables when unidimensional IRT models are fitted to data using maximum marginal likelihood estimation. The purpose of this research is to evaluate the performance of RC-IRT for Likert-type item responses with varying test lengths, sample…
Descriptors: Test Length, Item Response Theory, Sample Size, Comparative Analysis
Monahan, Patrick O.; Stump, Timothy E.; Finch, Holmes; Hambleton, Ronald K. – Applied Psychological Measurement, 2007
DETECT is a nonparametric "full" dimensionality assessment procedure that clusters dichotomously scored items into dimensions and provides a DETECT index of magnitude of multidimensionality. Four factors (test length, sample size, item response theory [IRT] model, and DETECT index) were manipulated in a Monte Carlo study of bias, standard error,…
Descriptors: Test Length, Sample Size, Monte Carlo Methods, Geometric Concepts
De Ayala, R. J. – Applied Psychological Measurement, 1994
Previous work on the effects of dimensionality on parameter estimation for dichotomous models is extended to the graded response model. Datasets are generated that differ in the number of latent factors as well as their interdimensional association, number of test items, and sample size. (SLD)
Descriptors: Estimation (Mathematics), Item Response Theory, Maximum Likelihood Statistics, Sample Size
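Samejima's graded response model, the subject of the study above, gives the probability of responding in category k or higher a two-parameter logistic form; a minimal sketch, where the discrimination a and ordered thresholds b_k are illustrative values rather than quantities from the abstract:

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Category response probabilities under the graded response model.
    P(X >= k) = 1 / (1 + exp(-a * (theta - b_k))); each category
    probability is the difference of adjacent cumulative probabilities."""
    cum = ([1.0]
           + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds]
           + [0.0])
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# Four response categories defined by three ordered thresholds.
probs = grm_category_probs(theta=0.0, a=1.2, thresholds=[-1.0, 0.0, 1.0])
```

With m thresholds the model yields m + 1 category probabilities that sum to one, which is what makes the dimensionality and sample-size manipulations in such recovery studies well defined.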
Stark, Stephen; Drasgow, Fritz – Applied Psychological Measurement, 2002
Describes item response and information functions for the Zinnes and Griggs (1974) paired comparison item response theory (IRT) model and presents procedures for estimating stimulus and person parameters. Monte Carlo simulations show that at least 400 ratings are required to obtain reasonably accurate estimates of the stimulus parameters and their…
Descriptors: Comparative Analysis, Computer Simulation, Error of Measurement, Item Response Theory
Stone, Clement A. – Applied Psychological Measurement, 1992
Monte Carlo methods are used to evaluate marginal maximum likelihood estimation of item parameters and maximum likelihood estimates of theta in the two-parameter logistic model for varying test lengths, sample sizes, and assumed theta distributions. Results with 100 datasets demonstrate the methods' general precision and stability. Exceptions are…
Descriptors: Computer Software Evaluation, Estimation (Mathematics), Mathematical Models, Maximum Likelihood Statistics
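A Monte Carlo recovery study of the kind Stone describes starts by generating dichotomous responses from the two-parameter logistic model; a minimal data-generation sketch, with illustrative parameter values (the sample size, test length, and theta distribution here are assumptions, not the study's conditions):

```python
import math
import random

def p_2pl(theta, a, b):
    """Two-parameter logistic probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def simulate_responses(thetas, items, seed=0):
    """Generate a persons-by-items 0/1 response matrix under the 2PL model."""
    rng = random.Random(seed)
    return [[int(rng.random() < p_2pl(t, a, b)) for (a, b) in items]
            for t in thetas]

# 500 simulees from a standard normal theta distribution, 3 items
# with (discrimination, difficulty) pairs.
gen = random.Random(1)
thetas = [gen.gauss(0.0, 1.0) for _ in range(500)]
items = [(1.0, -0.5), (1.5, 0.0), (0.8, 0.5)]
data = simulate_responses(thetas, items)
```

Item parameters would then be re-estimated from each generated dataset (e.g., by marginal maximum likelihood) and compared with the generating values to assess precision and stability.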
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S. – Applied Psychological Measurement, 2006
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
Descriptors: Computation, Monte Carlo Methods, Markov Processes, Item Response Theory

