Publication Date

| Period | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 4 |

Descriptor

| Descriptor | Results |
| --- | --- |
| Evaluation Methods | 7 |
| Multidimensional Scaling | 7 |
| Simulation | 7 |
| Item Response Theory | 4 |
| Item Analysis | 3 |
| Models | 3 |
| Data Analysis | 2 |
| Evaluation Research | 2 |
| Predictor Variables | 2 |
| Sample Size | 2 |
| Statistical Analysis | 2 |

Source

| Source | Results |
| --- | --- |
| Journal of Educational Measurement | 2 |
| Applied Psychological Measurement | 1 |
| Educational Sciences: Theory and Practice | 1 |
| Educational and Psychological Measurement | 1 |
| Psychological Methods | 1 |

Publication Type

| Publication Type | Results |
| --- | --- |
| Journal Articles | 6 |
| Reports - Research | 6 |
| Reports - Evaluative | 1 |
| Speeches/Meeting Papers | 1 |

Education Level

| Education Level | Results |
| --- | --- |
| Adult Education | 1 |
| Elementary Education | 1 |
| Elementary Secondary Education | 1 |
| Grade 8 | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Secondary Education | 1 |

Assessments and Surveys

| Assessment | Results |
| --- | --- |
| Trends in International Mathematics and Science Study | 1 |

Feuerstahler, Leah; Wilson, Mark – Journal of Educational Measurement, 2019
Scores estimated from multidimensional item response theory (IRT) models are not necessarily comparable across dimensions. In this article, the concept of aligned dimensions is formalized in the context of Rasch models, and two methods are described--delta dimensional alignment (DDA) and logistic regression alignment (LRA)--to transform estimated…
Descriptors: Item Response Theory, Models, Scores, Comparative Analysis
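
The abstract is truncated before the details of DDA and LRA, so the sketch below shows only the underlying comparability problem and a deliberately simple stand-in (a plain linear mean/SD rescaling, not the paper's DDA or LRA): dimension scores simulated on different scales are brought to a common metric. All values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical person scores on two dimensions of a multidimensional
# Rasch model, deliberately on different scales (means 0.0 vs 0.8,
# SDs 1.0 vs 0.5), so raw cross-dimension comparisons are misleading.
theta = rng.normal([0.0, 0.8], [1.0, 0.5], size=(500, 2))

# Naive stand-in for alignment (NOT the paper's DDA/LRA): linearly
# rescale each dimension to mean 0 and SD 1.
aligned = (theta - theta.mean(axis=0)) / theta.std(axis=0)
print(aligned.mean(axis=0).round(3), aligned.std(axis=0).round(3))
```
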
Öztürk-Gübes, Nese; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2016
The purpose of this study was to examine the impact of dimensionality, common-item set format, and different scale linking methods on preserving equity property with mixed-format test equating. Item response theory (IRT) true-score equating (TSE) and IRT observed-score equating (OSE) methods were used under the common-item nonequivalent groups design.…
Descriptors: Test Format, Item Response Theory, True Scores, Equated Scores
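
For orientation, here is a minimal sketch of IRT true-score equating under a 2PL model: a form-X number-correct true score is mapped to theta by inverting the test characteristic curve, then converted to the form-Y true score. The item parameters are fabricated, and the observed-score method and common-item design details are not shown.

```python
import numpy as np
from scipy.optimize import brentq

def p2pl(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def tcc(theta, a, b):
    """Test characteristic curve: expected number-correct true score."""
    return p2pl(theta, a, b).sum()

# Hypothetical item parameters for forms X and Y (illustrative only).
aX, bX = np.array([1.0, 1.2, 0.8]), np.array([-0.5, 0.0, 0.7])
aY, bY = np.array([0.9, 1.1, 1.3]), np.array([-0.2, 0.3, 0.5])

def tse_equate(x_true_score):
    """Map a form-X true score to its form-Y equivalent via theta.
    Valid only for scores strictly between tcc(-6) and tcc(6)."""
    theta = brentq(lambda t: tcc(t, aX, bX) - x_true_score, -6.0, 6.0)
    return tcc(theta, aY, bY)

print(round(tse_equate(1.5), 3))
```
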
Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria – Psychological Methods, 2012
A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…
Descriptors: Factor Analysis, Computation, Simulation, Sample Size
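
A core ingredient of such simulations is generating ordinal indicators by discretizing latent normal variables. A minimal sketch follows; the thresholds, correlation, and sample size are arbitrary choices, not the study's conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent continuous responses: 300 cases on 4 equicorrelated normals.
n, p, rho = 300, 4, 0.5
cov = np.full((p, p), rho)
np.fill_diagonal(cov, 1.0)
latent = rng.multivariate_normal(np.zeros(p), cov, size=n)

# Cut each latent variable into 5 ordered categories at fixed
# thresholds, mimicking ordinal indicators of a normal construct.
thresholds = [-1.5, -0.5, 0.5, 1.5]
ordinal = np.digitize(latent, thresholds)  # category codes 0..4
print(np.bincount(ordinal.ravel()))
```
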
Sheng, Yanyan; Wikle, Christopher K. – Educational and Psychological Measurement, 2008
As item response models gain increased popularity in large-scale educational and measurement testing situations, many studies have been conducted on the development and applications of unidimensional and multidimensional models. Recently, attention has been paid to IRT-based models with an overall ability dimension underlying several ability…
Descriptors: Test Items, Individual Testing, Item Response Theory, Evaluation Methods
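
For a concrete picture of the model family the abstract points to, below is a generic item response function in which an item loads on both an overall ability and one specific ability (a bifactor-style 2PL). It is not claimed to be Sheng and Wikle's exact parameterization.

```python
import numpy as np

def p_overall_specific(theta_g, theta_s, a_g, a_s, b):
    """Probability of a correct response for an item that loads on an
    overall ability (theta_g) and one specific ability (theta_s)."""
    return 1.0 / (1.0 + np.exp(-(a_g * theta_g + a_s * theta_s - b)))

# Hypothetical abilities and item parameters (illustrative only).
print(round(p_overall_specific(0.5, -0.3, a_g=1.2, a_s=0.7, b=0.1), 3))
```
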
Habing, Brian; Finch, Holmes; Roberts, James S. – Applied Psychological Measurement, 2005
Although there are many methods available for dimensionality assessment for items with monotone item response functions, there are few methods available for unfolding item response theory models. In this study, a modification of Yen's Q3 statistic is proposed for the case of these nonmonotone item response models. Through a simulation study, the…
Descriptors: Data Analysis, Simulation, Multidimensional Scaling, Item Response Theory
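
The study's modification for nonmonotone (unfolding) models is not reproduced in the abstract; as background, here is the standard Yen's Q3 it builds on: correlate residuals (observed response minus model-implied probability) across item pairs. The responses and fitted probabilities below are fabricated for illustration.

```python
import numpy as np

def q3_matrix(responses, probs):
    """Yen's Q3: correlations of item-pair residuals, where a residual
    is the observed response minus the model-implied probability.
    responses, probs: (n_persons, n_items) arrays."""
    resid = responses - probs
    return np.corrcoef(resid, rowvar=False)

# Hypothetical fitted probabilities and matching 0/1 responses.
rng = np.random.default_rng(2)
probs = rng.uniform(0.2, 0.8, size=(200, 5))
responses = (rng.uniform(size=probs.shape) < probs).astype(float)
print(q3_matrix(responses, probs).round(2))
```
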
Sireci, Stephen G.; Bastari, B. – 1998
In many cross-cultural research studies, assessment instruments are translated or adapted for use in multiple languages. However, it cannot be assumed that different language versions of an assessment are equivalent across languages. A fundamental issue to be addressed is the comparability or equivalence of the construct measured by each language…
Descriptors: Construct Validity, Cross Cultural Studies, Evaluation Methods, Multidimensional Scaling
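
One common way to operationalize this comparison (assumed here, not taken from the paper) is to fit a multidimensional scaling configuration to inter-item dissimilarities separately for each language group and then compare the configurations. A sketch with fabricated response data:

```python
import numpy as np
from sklearn.manifold import MDS

def item_dissimilarities(responses):
    """A simple (assumed) inter-item dissimilarity: 1 - |correlation|."""
    r = np.corrcoef(responses, rowvar=False)
    return 1.0 - np.abs(r)

# Hypothetical 0/1 responses for two language versions of a 10-item test.
rng = np.random.default_rng(3)
resp_lang1 = rng.integers(0, 2, size=(250, 10)).astype(float)
resp_lang2 = rng.integers(0, 2, size=(250, 10)).astype(float)

# One 2-dimensional MDS configuration per language group; comparing the
# two item configurations probes construct equivalence.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
conf1 = mds.fit_transform(item_dissimilarities(resp_lang1))
conf2 = mds.fit_transform(item_dissimilarities(resp_lang2))
print(conf1.shape, conf2.shape)
```
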
Gierl, Mark J.; Leighton, Jacqueline P.; Tan, Xuan – Journal of Educational Measurement, 2006
DETECT, the acronym for Dimensionality Evaluation To Enumerate Contributing Traits, is an innovative and relatively new nonparametric dimensionality assessment procedure used to identify mutually exclusive, dimensionally homogeneous clusters of items using a genetic algorithm (Zhang & Stout, 1999). Because the clusters of items are mutually…
Descriptors: Program Evaluation, Cluster Grouping, Evaluation Methods, Multivariate Analysis
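
DETECT's building block is the conditional covariance of an item pair given a score on the remaining items; the index then weights these covariances by a candidate cluster partition, which the genetic algorithm searches over. The sketch below computes only the conditional covariance for one pair, on fabricated data; the partition search and the DETECT index itself are not shown.

```python
import numpy as np

def conditional_cov(x_i, x_j, rest):
    """Covariance of items i and j conditional on the rest score,
    averaged over rest-score groups weighted by group size."""
    total, n = 0.0, len(rest)
    for s in np.unique(rest):
        m = rest == s
        if m.sum() > 1:  # need at least 2 cases for a covariance
            total += m.sum() / n * np.cov(x_i[m], x_j[m])[0, 1]
    return total

# Hypothetical 0/1 responses: 400 examinees, 6 items.
rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(400, 6)).astype(float)

i, j = 0, 1
rest = X.sum(axis=1) - X[:, i] - X[:, j]  # score on the other items
print(round(conditional_cov(X[:, i], X[:, j], rest), 4))
```
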
