Publication Date

| Period | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 9 |
| Since 2017 (last 10 years) | 17 |
| Since 2007 (last 20 years) | 30 |
Descriptor

| Descriptor | Records |
| --- | --- |
| Accuracy | 30 |
| Comparative Analysis | 30 |
| Sample Size | 30 |
| Item Response Theory | 12 |
| Monte Carlo Methods | 12 |
| Computation | 11 |
| Error of Measurement | 10 |
| Test Items | 10 |
| Statistical Analysis | 7 |
| Test Length | 7 |
| Classification | 6 |
Author

| Author | Records |
| --- | --- |
| Moses, Tim | 2 |
| Albano, Anthony D. | 1 |
| Allan S. Cohen | 1 |
| Anil, Duygu | 1 |
| Bellara, Aarti | 1 |
| Chang, Hua-Hua | 1 |
| Chen, Hanwei | 1 |
| Christopher E. Shank | 1 |
| Chun Wang | 1 |
| Cikrikci, Rahime Nukhet | 1 |
| Cui, Zhongmin | 1 |
Publication Type

| Publication Type | Records |
| --- | --- |
| Journal Articles | 20 |
| Reports - Research | 20 |
| Dissertations/Theses -… | 7 |
| Reports - Evaluative | 3 |
| Numerical/Quantitative Data | 1 |
Location

| Location | Records |
| --- | --- |
| Turkey | 1 |
Lingbo Tong; Wen Qu; Zhiyong Zhang – Grantee Submission, 2025
Factor analysis is widely used to identify latent factors underlying observed variables. This paper presents a comprehensive comparative study of two widely used methods for determining the optimal number of factors in factor analysis, the K1 rule and parallel analysis, along with a more recently developed method, the bass-ackward method.…
Descriptors: Factor Analysis, Monte Carlo Methods, Statistical Analysis, Sample Size
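The two classical criteria being compared lend themselves to a compact illustration. A minimal NumPy sketch, assuming normal comparison data, 100 replications, and the traditional mean-eigenvalue reference; these settings are illustrative, not the authors' (the bass-ackward method is omitted):

```python
import numpy as np

def k1_rule(data):
    """K1 (Kaiser) rule: retain as many factors as there are
    correlation-matrix eigenvalues greater than 1."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
    return int(np.sum(eigvals > 1.0))

def parallel_analysis(data, n_reps=100, seed=0):
    """Traditional parallel analysis: retain factors whose eigenvalues
    exceed the mean eigenvalues from datasets of independent variables."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    ref = np.zeros(p)
    for _ in range(n_reps):
        sim = rng.standard_normal((n, p))  # no common factors by construction
        ref += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    return int(np.sum(obs > ref / n_reps))
```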
Christopher E. Shank – ProQuest LLC, 2024
This dissertation compares the performance of equivalence test (EQT) and null hypothesis test (NHT) procedures for identifying invariant and noninvariant factor loadings under a range of experimental manipulations. EQT is the statistically appropriate approach when the research goal is to find evidence of group similarity rather than group…
Descriptors: Factor Analysis, Goodness of Fit, Intervals, Comparative Analysis
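The EQT/NHT contrast can be made concrete for a single loading difference with two one-sided tests (TOST). This is a minimal sketch under a normal approximation with a hypothetical equivalence bound `delta`; it is not the dissertation's actual procedure:

```python
from scipy.stats import norm

def nht_p(diff, se):
    """NHT: H0 is 'loadings are equal'; small p suggests noninvariance."""
    return 2 * norm.sf(abs(diff / se))

def eqt_p(diff, se, delta=0.2):
    """EQT via TOST: H0 is '|difference| >= delta'; small p is direct
    evidence of practical invariance (group similarity)."""
    p_lower = norm.sf((diff + delta) / se)   # one-sided H0: diff <= -delta
    p_upper = norm.cdf((diff - delta) / se)  # one-sided H0: diff >= +delta
    return max(p_lower, p_upper)
```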
Shaojie Wang; Won-Chan Lee; Minqiang Zhang; Lixin Yuan – Applied Measurement in Education, 2024
To reduce the impact of parameter estimation errors on IRT linking results, recent work introduced two information-weighted characteristic curve methods for dichotomous items. These two methods showed outstanding performance in both simulation and pseudo-form pseudo-group analysis. The current study expands upon the concept of information…
Descriptors: Item Response Theory, Test Format, Test Length, Error of Measurement
Kalkan, Ömür Kaya – Measurement: Interdisciplinary Research and Perspectives, 2022
The four-parameter logistic (4PL) item response theory (IRT) model has recently been reconsidered in the literature owing to advances in statistical modeling software and recent developments in estimating the 4PL model's parameters. The current simulation study evaluated the performance of expectation-maximization (EM),…
Descriptors: Comparative Analysis, Sample Size, Test Length, Algorithms
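For reference, the 4PL item response function extends the 3PL with an upper asymptote d < 1 that allows even high-ability examinees to slip; a direct NumPy transcription:

```python
import numpy as np

def p_4pl(theta, a, b, c, d):
    """4PL probability of a correct response:
    P(theta) = c + (d - c) / (1 + exp(-a * (theta - b)))
    a: discrimination, b: difficulty,
    c: lower asymptote (guessing), d: upper asymptote (slip)."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

# Example: a discriminating item where even able examinees slip 2% of the time
print(p_4pl(theta=np.array([-2.0, 0.0, 2.0]), a=1.5, b=0.0, c=0.2, d=0.98))
```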
Sedat Sen; Allan S. Cohen – Educational and Psychological Measurement, 2024
A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct latent class in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), Bayesian information criterion (BIC), consistent AIC (CAIC), Draper's…
Descriptors: Goodness of Fit, Item Response Theory, Sample Size, Classification
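The first four of these indices have standard closed forms, computable from the maximized log-likelihood logL, the number of free parameters k, and the sample size n:

```python
import numpy as np

def fit_indices(logL, k, n):
    """Closed-form information criteria; smaller is better for all four."""
    aic = -2 * logL + 2 * k
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)  # small-sample correction
    bic = -2 * logL + k * np.log(n)
    caic = -2 * logL + k * (np.log(n) + 1)        # consistent AIC
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "CAIC": caic}
```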
Wang, Shaojie; Zhang, Minqiang; Lee, Won-Chan; Huang, Feifei; Li, Zonglong; Li, Yixing; Yu, Sufang – Journal of Educational Measurement, 2022
Traditional IRT characteristic curve linking methods ignore parameter estimation errors, which may undermine the accuracy of estimated linking constants. Two new linking methods are proposed that take into account parameter estimation errors. The item- (IWCC) and test-information-weighted characteristic curve (TWCC) methods employ weighting…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Monte Carlo Methods
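A hedged sketch of the weighting idea for the 2PL case: a Haebara-style linking loss in which each item's squared characteristic-curve difference is weighted by its Fisher information on the quadrature grid. The information weight and the 2PL restriction are illustrative assumptions; the paper derives its IWCC/TWCC weights from parameter estimation errors.

```python
import numpy as np
from scipy.optimize import minimize

def p_2pl(theta, a, b):
    """2PL ICCs evaluated on a theta grid for all items at once."""
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))

def weighted_haebara_loss(AB, a_new, b_new, a_old, b_old, theta):
    """Squared ICC differences after rescaling the new form's parameters
    (a/A, A*b + B) onto the old scale, weighted by item information
    I = a^2 * P * (1 - P). The weight choice here is illustrative."""
    A, B = AB
    p_old = p_2pl(theta, a_old, b_old)
    p_new = p_2pl(theta, a_new / A, A * b_new + B)
    info = a_old**2 * p_old * (1.0 - p_old)
    return float(np.sum(info * (p_old - p_new) ** 2))

theta = np.linspace(-4, 4, 41)                    # quadrature grid
a_old = np.array([1.2, 0.8, 1.5]); b_old = np.array([-0.5, 0.3, 1.0])
a_new = a_old * 1.1; b_new = (b_old - 0.2) / 1.1  # same items, shifted scale
res = minimize(weighted_haebara_loss, x0=[1.0, 0.0],
               args=(a_new, b_new, a_old, b_old, theta))
print(res.x)  # recovered linking constants A, B (about 1.1 and 0.2)
```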
Guler, Gul; Cikrikci, Rahime Nukhet – International Journal of Assessment Tools in Education, 2022
The purpose of this study was to investigate the Type I error and power rates of the methods used to determine dimensionality in unidimensional and bidimensional psychological constructs under various conditions (distribution characteristics, sample size, test length, and interdimensional correlation) and to examine the joint…
Descriptors: Comparative Analysis, Error of Measurement, Decision Making, Factor Analysis
Chun Wang; Ruoyi Zhu; Gongjun Xu – Grantee Submission, 2022
Differential item functioning (DIF) analysis refers to procedures that evaluate whether an item's characteristics differ across groups of persons after controlling for overall differences in performance. DIF is routinely evaluated as a screening step to ensure items behave the same across groups. Currently, the majority of DIF studies focus…
Descriptors: Models, Item Response Theory, Item Analysis, Comparative Analysis
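One widely used screen consistent with this definition is logistic-regression DIF: regress the item response on the total score (the matching criterion) and on group membership, and flag the item if group adds explanatory power. A minimal statsmodels sketch under that assumption, not necessarily the procedure this paper studies:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def lr_dif_test(item, total, group):
    """Uniform-DIF screen: likelihood-ratio test for group membership
    after conditioning on the matching variable (total score)."""
    m0 = sm.Logit(item, sm.add_constant(total)).fit(disp=0)
    X1 = sm.add_constant(np.column_stack([total, group]))
    m1 = sm.Logit(item, X1).fit(disp=0)
    lr = 2 * (m1.llf - m0.llf)
    return lr, chi2.sf(lr, df=1)  # chi-square with 1 df
```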
No, Unkyung; Hong, Sehee – Educational and Psychological Measurement, 2018
The purpose of the present study is to compare the performance of mixture modeling approaches (i.e., the one-step approach, three-step maximum-likelihood approach, three-step BCH approach, and LTB approach) under diverse sample size conditions. To carry out this research, two simulation studies were conducted with two different models, a latent class…
Descriptors: Sample Size, Classification, Comparative Analysis, Statistical Analysis
Paulsen, Justin; Valdivia, Dubravka Svetina – Journal of Experimental Education, 2022
Cognitive diagnostic models (CDMs) are a family of psychometric models designed to provide categorical classifications for multiple latent attributes. CDMs provide more granular evidence than other psychometric models and have potential for guiding teaching and learning decisions in the classroom. However, CDM analyses have primarily been conducted using…
Descriptors: Psychometrics, Classification, Teaching Methods, Learning Processes
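As a concrete instance of the categorical classification these models perform, a DINA-model sketch; the choice of DINA (one common CDM) and the parameter values are illustrative, not drawn from this study:

```python
import numpy as np

def dina_prob(alpha, q, guess, slip):
    """DINA model: an examinee with attribute pattern alpha answers an item
    correctly with prob 1 - slip if they hold ALL attributes the item's
    Q-matrix row requires, and with prob guess otherwise."""
    eta = np.all(alpha >= q, axis=-1).astype(float)  # mastery indicator
    return eta * (1 - slip) + (1 - eta) * guess

q_row = np.array([1, 1, 0])  # item requires attributes 1 and 2
print(dina_prob(np.array([1, 1, 0]), q_row, guess=0.2, slip=0.1))  # 0.9
print(dina_prob(np.array([1, 0, 1]), q_row, guess=0.2, slip=0.1))  # 0.2
```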
Green, Samuel; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2018
Parallel analysis (PA) assesses the number of factors in exploratory factor analysis. Traditionally, PA compares the eigenvalues of a sample correlation matrix with the eigenvalues of correlation matrices from 100 comparison datasets generated such that the variables are independent, but this approach uses the wrong reference distribution. The…
Descriptors: Factor Analysis, Accuracy, Statistical Distributions, Comparative Analysis
Inal, Hatice; Anil, Duygu – Eurasian Journal of Educational Research, 2018
Purpose: This study aimed to examine the impact of differential item functioning in anchor items on group invariance in test equating for different sample sizes. Within this scope, the factors chosen to investigate group invariance in test equating were sample size, the relative sample sizes of subgroups, differential form of differential…
Descriptors: Equated Scores, Test Bias, Test Items, Sample Size
Kilic, Abdullah Faruk; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Weighted least squares (WLS), weighted least squares mean-and-variance-adjusted (WLSMV), unweighted least squares mean-and-variance-adjusted (ULSMV), maximum likelihood (ML), robust maximum likelihood (MLR) and Bayesian estimation methods were compared in mixed item response type data via Monte Carlo simulation. The percentage of polytomous items,…
Descriptors: Factor Analysis, Computation, Least Squares Statistics, Maximum Likelihood Statistics
Liu, Chunyan; Kolen, Michael J. – Journal of Educational Measurement, 2018
Smoothing techniques are designed to improve the accuracy of equating functions. The main purpose of this study is to compare seven model selection strategies for choosing the smoothing parameter (C) for polynomial loglinear presmoothing and one procedure for model selection in cubic spline postsmoothing for mixed-format pseudo tests under the…
Descriptors: Comparative Analysis, Accuracy, Models, Sample Size
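Polynomial loglinear presmoothing fits the log of the expected score frequencies as a degree-C polynomial of the score, which a Poisson GLM reproduces directly. A minimal statsmodels sketch; the model-selection strategies compared in the study would wrap a fit like this in a loop over candidate values of C:

```python
import numpy as np
import statsmodels.api as sm

def loglinear_presmooth(freqs, C):
    """Fit log m_x = b0 + b1*x + ... + bC*x^C to observed score
    frequencies by Poisson regression; returns smoothed frequencies.
    Powers are standardized to keep the design matrix well conditioned."""
    x = np.arange(len(freqs), dtype=float)
    z = (x - x.mean()) / x.std()
    X = sm.add_constant(np.column_stack([z**p for p in range(1, C + 1)]))
    fit = sm.GLM(freqs, X, family=sm.families.Poisson()).fit()
    return fit.fittedvalues

# A selection strategy then chooses C, e.g., by comparing AIC across degrees.
```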
Kang, Hyeon-Ah; Lu, Ying; Chang, Hua-Hua – Applied Measurement in Education, 2017
Increasing use of item pools in large-scale educational assessments calls for an appropriate scaling procedure to achieve a common metric among field-tested items. The present study examines scaling procedures for developing a new item pool under a spiraled block linking design. Three scaling procedures are considered: (a) concurrent…
Descriptors: Item Response Theory, Accuracy, Educational Assessment, Test Items
