Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 5 |
| Since 2022 (last 5 years) | 22 |
| Since 2017 (last 10 years) | 48 |
| Since 2007 (last 20 years) | 112 |
Author
| Author | Records |
| --- | --- |
| Thompson, Marilyn S. | 6 |
| Ahn, Soyeon | 4 |
| Edwards, Michael C. | 4 |
| Finch, Holmes | 4 |
| Green, Samuel B. | 4 |
| Jin, Ying | 3 |
| Levy, Roy | 3 |
| Myers, Nicholas D. | 3 |
| Walters, Glenn D. | 3 |
| Alvarado, Jesús M. | 2 |
| Asparouhov, Tihomir | 2 |
Education Level
| Education level | Records |
| --- | --- |
| Higher Education | 7 |
| Postsecondary Education | 5 |
| Elementary Education | 4 |
| Secondary Education | 4 |
| High Schools | 3 |
| Middle Schools | 3 |
| Grade 1 | 2 |
| Junior High Schools | 2 |
| Early Childhood Education | 1 |
| Primary Education | 1 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 3 |
Location
| Location | Records |
| --- | --- |
| Turkey | 2 |
| United Kingdom | 2 |
| Australia | 1 |
| Czech Republic | 1 |
| Dominican Republic | 1 |
| Indiana | 1 |
| Louisiana | 1 |
| Maryland | 1 |
| Mauritania | 1 |
| Russia | 1 |
| Taiwan | 1 |
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
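The truncated abstract stops mid-synthesis step. As a hedged illustration of the kind of pooling such a synthesis typically involves (the specific method is an assumption, not taken from the article), a fixed-effect inverse-variance combination of per-study DIF effect sizes can be sketched as:

```python
# Hypothetical sketch: fixed-effect (inverse-variance) pooling of per-study
# DIF effect sizes. The article's MGCFA-based estimation step is not shown;
# the study values below are made up for illustration.
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance weighted mean of the effects and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

effects = [0.30, 0.12, 0.25]    # per-study DIF effect sizes (illustrative)
variances = [0.04, 0.02, 0.05]  # their sampling variances (illustrative)
est, se = pool_fixed_effect(effects, variances)
```

Each study's contribution is weighted by the precision of its estimate, so larger, more precise studies dominate the pooled value.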
Green, Samuel; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2018
Parallel analysis (PA) assesses the number of factors in exploratory factor analysis. Traditionally PA compares the eigenvalues for a sample correlation matrix with the eigenvalues for correlation matrices for 100 comparison datasets generated such that the variables are independent, but this approach uses the wrong reference distribution. The…
Descriptors: Factor Analysis, Accuracy, Statistical Distributions, Comparative Analysis
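A minimal sketch of the traditional PA comparison the abstract describes (not the revised reference distribution the authors argue for). It uses the two-variable special case, where the largest eigenvalue of a correlation matrix is simply 1 + |r|, so plain-Python arithmetic suffices; all data are simulated:

```python
# Traditional parallel analysis, two-variable toy case: retain a factor if the
# observed largest eigenvalue exceeds the 95th percentile of largest
# eigenvalues from independent (null) comparison datasets.
import random
import statistics

def sample_r(x, y):
    """Pearson correlation of two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

rng = random.Random(1)
n, reps = 200, 100

# Reference distribution: largest eigenvalue (1 + |r|) under independence.
null_eigs = sorted(
    1 + abs(sample_r([rng.gauss(0, 1) for _ in range(n)],
                     [rng.gauss(0, 1) for _ in range(n)]))
    for _ in range(reps)
)
threshold = null_eigs[94]  # approximate 95th percentile

# "Observed" data sharing a real common factor (population r = 0.5).
f = [rng.gauss(0, 1) for _ in range(n)]
x = [a + rng.gauss(0, 1) for a in f]
y = [a + rng.gauss(0, 1) for a in f]
observed_eig = 1 + abs(sample_r(x, y))
retain_factor = observed_eig > threshold
```

With real factor structure the observed eigenvalue clears the null threshold; the article's point is that this independent-variables null is the wrong reference distribution for eigenvalues beyond the first.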
Cao, Mengyang; Song, Q. Chelsea; Tay, Louis – International Journal of Testing, 2018
There is a growing use of noncognitive assessments around the world, and recent research has posited an ideal point response process underlying such measures. A critical issue is whether the typical use of dominance approaches (e.g., average scores, factor analysis, and Samejima's graded response model) in scoring such measures is adequate.…
Descriptors: Comparative Analysis, Item Response Theory, Factor Analysis, Models
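To make the dominance-versus-ideal-point distinction concrete, here is an illustrative sketch using generic functional forms (not the specific models compared in the article): a dominance item response function is monotone in the trait level, while an ideal-point function peaks where the respondent's trait level matches the item's location:

```python
# Illustrative contrast of two response processes. Under a dominance process,
# endorsement probability rises monotonically with the trait level theta;
# under an ideal-point process it is single-peaked around the item location,
# so a very high-theta respondent can reject a moderate item.
import math

def dominance_p(theta, b, a=1.0):
    """2PL-style monotone item response function (illustrative)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ideal_point_p(theta, location, spread=1.0):
    """Proximity-based, single-peaked response function (illustrative)."""
    return math.exp(-((theta - location) ** 2) / (2 * spread ** 2))

p_mid = ideal_point_p(0.0, location=0.0)   # respondent at the item location
p_high = ideal_point_p(3.0, location=0.0)  # respondent far above it
```

Scoring ideal-point data with a dominance model treats the drop-off past the item location as error, which is the adequacy question the article raises.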
Zeynivandnezhad, Fereshteh; Rashed, Fatemeh; Kanooni, Arman – Anatolian Journal of Education, 2019
Factor analysis is a statistical technique that is widely used in psychology and social sciences. Using computers and statistical packages, implementation of multivariate factor analysis and other multivariate methods becomes possible for researchers. Exploratory factor analysis and confirmatory factor analysis are applied in different studies;…
Descriptors: Factor Analysis, Technological Literacy, Pedagogical Content Knowledge, Mathematics Teachers
Olivera-Aguilar, Margarita; Rikoon, Samuel H.; Gonzalez, Oscar; Kisbu-Sakarya, Yasemin; MacKinnon, David P. – Educational and Psychological Measurement, 2018
When testing a statistical mediation model, it is assumed that factorial measurement invariance holds for the mediating construct across levels of the independent variable X. The consequences of failing to address the violations of measurement invariance in mediation models are largely unknown. The purpose of the present study was to…
Descriptors: Error of Measurement, Statistical Analysis, Factor Analysis, Simulation
Pornchanok Ruengvirayudh – ProQuest LLC, 2018
Determining the number of dimensions underlying many variables in the data or many items in the test is a crucial process prior to performing exploratory factor analysis. Failure to do so leads to serious consequences concerning construct validity. Parallel analysis (PA) has been found to be useful to determine the number of dimensions (i.e.,…
Descriptors: Monte Carlo Methods, Tests, Data, Sample Size
Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy – Educational and Psychological Measurement, 2016
Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…
Descriptors: Accuracy, Factor Analysis, Hypothesis Testing, Monte Carlo Methods
Clark, D. Angus; Bowles, Ryan P. – Grantee Submission, 2018
In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present…
Descriptors: Factor Analysis, Goodness of Fit, Factor Structure, Monte Carlo Methods
Fan, Yi; Lance, Charles E. – Educational and Psychological Measurement, 2017
The correlated trait-correlated method (CTCM) model for the analysis of multitrait-multimethod (MTMM) data is known to suffer convergence and admissibility (C&A) problems. We describe a little-known and seldom-applied reparameterized version of this model (CTCM-R) based on Rindskopf's reparameterization of the simpler confirmatory factor…
Descriptors: Multitrait Multimethod Techniques, Correlation, Goodness of Fit, Models
Dai, Shenghai; Svetina, Dubravka; Wang, Xiaolin – Journal of Educational and Behavioral Statistics, 2017
There is an increasing interest in reporting test subscores for diagnostic purposes. In this article, we review nine popular R packages (subscore, mirt, TAM, sirt, CDM, NPCD, lavaan, sem, and OpenMx) that are capable of implementing subscore-reporting methods within one or more frameworks including classical test theory, multidimensional item…
Descriptors: Diagnostic Tests, Scores, Computer Software, Item Response Theory
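The packages reviewed are R packages; as a language-neutral sketch of one classical-test-theory step they automate, here is Cronbach's alpha for a subscale, a common first check before reporting its subscore (illustrative data, not the API of any package named above):

```python
# Cronbach's alpha for one subscale: alpha = k/(k-1) * (1 - sum(item
# variances) / variance of total scores). Illustrative item responses only.
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item, all of equal length."""
    k = len(item_scores)
    item_vars = [statistics.pvariance(item) for item in item_scores]
    totals = [sum(vals) for vals in zip(*item_scores)]
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

items = [                       # 3 items x 8 respondents (made up)
    [2, 3, 3, 4, 5, 1, 2, 4],
    [1, 3, 2, 4, 5, 2, 2, 5],
    [2, 2, 3, 5, 4, 1, 3, 4],
]
alpha = cronbach_alpha(items)
```

A subscore based on too few or weakly related items yields a low alpha, one signal that the subscore may add little beyond the total score.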
Dogucu, Mine – ProQuest LLC, 2017
When researchers fit statistical models to multiply imputed datasets, they have to fit the model separately for each imputed dataset. Since there are multiple datasets, there are always multiple sets of model results. It is possible for some of these sets of results not to converge while some do converge. This study examined occurrence of such a…
Descriptors: Statistical Analysis, Error of Measurement, Goodness of Fit, Monte Carlo Methods
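For context on what fitting the model separately to each imputed dataset feeds into, here is a hedged sketch of Rubin's rules for pooling the m sets of results (illustrative numbers; under the partial non-convergence the study examines, some entries would simply be missing):

```python
# Rubin's rules: pool m point estimates by their mean, and combine
# within-imputation and between-imputation variance into a total variance.
# All estimate and variance values below are made up for illustration.
import math

def rubins_rules(estimates, variances):
    """Pool m point estimates and their within-imputation variances."""
    m = len(estimates)
    qbar = sum(estimates) / m                               # pooled estimate
    w = sum(variances) / m                                  # within-imputation
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)   # between-imputation
    total = w + (1 + 1 / m) * b
    return qbar, math.sqrt(total)

est, se = rubins_rules([0.52, 0.47, 0.55, 0.50, 0.49],
                       [0.010, 0.012, 0.009, 0.011, 0.010])
```

If some fits fail to converge, the analyst must decide whether to pool over the converged subset, which changes m and can bias both terms of the variance.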
Dimitrov, Dimiter M. – Measurement and Evaluation in Counseling and Development, 2017
This article offers an approach to examining differential item functioning (DIF) under its item response theory (IRT) treatment in the framework of confirmatory factor analysis (CFA). The approach is based on integrating IRT- and CFA-based testing of DIF and using bias-corrected bootstrap confidence intervals with a syntax code in Mplus.
Descriptors: Test Bias, Item Response Theory, Factor Analysis, Evaluation Methods
Feng, Xiang-Nan; Wu, Hao-Tian; Song, Xin-Yuan – Sociological Methods & Research, 2017
We consider an ordinal regression model with latent variables to investigate the effects of observable and latent explanatory variables on the ordinal responses of interest. Each latent variable is characterized by correlated observed variables through a confirmatory factor analysis model. We develop a Bayesian adaptive lasso procedure to conduct…
Descriptors: Bayesian Statistics, Regression (Statistics), Models, Observation
Li, Ming; Harring, Jeffrey R. – Educational and Psychological Measurement, 2017
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…
Descriptors: Simulation, Comparative Analysis, Monte Carlo Methods, Guidelines
Koziol, Natalie A.; Bovaird, James A. – Educational and Psychological Measurement, 2018
Evaluations of measurement invariance provide essential construct validity evidence--a prerequisite for seeking meaning in psychological and educational research and ensuring fair testing procedures in high-stakes settings. However, the quality of such evidence is partly dependent on the validity of the resulting statistical conclusions. Type I or…
Descriptors: Computation, Tests, Error of Measurement, Comparative Analysis

