Showing all 7 results
Peer reviewed
Feuerstahler, Leah; Wilson, Mark – Journal of Educational Measurement, 2019
Scores estimated from multidimensional item response theory (IRT) models are not necessarily comparable across dimensions. In this article, the concept of aligned dimensions is formalized in the context of Rasch models, and two methods are described--delta dimensional alignment (DDA) and logistic regression alignment (LRA)--to transform estimated…
Descriptors: Item Response Theory, Models, Scores, Comparative Analysis
Peer reviewed
Öztürk-Gübes, Nese; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2016
The purpose of this study was to examine the impact of dimensionality, common-item set format, and different scale linking methods on preserving equity property with mixed-format test equating. Item response theory (IRT) true-score equating (TSE) and IRT observed-score equating (OSE) methods were used under common-item nonequivalent groups design.…
Descriptors: Test Format, Item Response Theory, True Scores, Equated Scores
Peer reviewed
Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria – Psychological Methods, 2012
A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…
Descriptors: Factor Analysis, Computation, Simulation, Sample Size
Peer reviewed
Sheng, Yanyan; Wikle, Christopher K. – Educational and Psychological Measurement, 2008
As item response models gain increased popularity in large-scale educational and measurement testing situations, many studies have been conducted on the development and applications of unidimensional and multidimensional models. Recently, attention has been paid to IRT-based models with an overall ability dimension underlying several ability…
Descriptors: Test Items, Individual Testing, Item Response Theory, Evaluation Methods
Peer reviewed
Habing, Brian; Finch, Holmes; Roberts, James S. – Applied Psychological Measurement, 2005
Although there are many methods available for dimensionality assessment for items with monotone item response functions, there are few methods available for unfolding item response theory models. In this study, a modification of Yen's Q3 statistic is proposed for the case of these nonmonotone item response models. Through a simulation study, the…
Descriptors: Data Analysis, Simulation, Multidimensional Scaling, Item Response Theory
Sireci, Stephen G.; Bastari, B. – 1998
In many cross-cultural research studies, assessment instruments are translated or adapted for use in multiple languages. However, it cannot be assumed that different language versions of an assessment are equivalent across languages. A fundamental issue to be addressed is the comparability or equivalence of the construct measured by each language…
Descriptors: Construct Validity, Cross Cultural Studies, Evaluation Methods, Multidimensional Scaling
Peer reviewed
Gierl, Mark J.; Leighton, Jacqueline P.; Tan, Xuan – Journal of Educational Measurement, 2006
DETECT, the acronym for Dimensionality Evaluation To Enumerate Contributing Traits, is an innovative and relatively new nonparametric dimensionality assessment procedure used to identify mutually exclusive, dimensionally homogeneous clusters of items using a genetic algorithm (Zhang & Stout, 1999). Because the clusters of items are mutually…
Descriptors: Program Evaluation, Cluster Grouping, Evaluation Methods, Multivariate Analysis