Showing 1 to 15 of 43 results
Peer reviewed
Direct link
Njål Foldnes; Jonas Moss; Steffen Grønneberg – Structural Equation Modeling: A Multidisciplinary Journal, 2025
We propose new ways of robustifying goodness-of-fit tests for structural equation modeling under non-normality. These test statistics have limit distributions characterized by eigenvalues whose estimates are highly unstable and biased in known directions. To take this into account, we design model-based trend predictions to approximate the…
Descriptors: Goodness of Fit, Structural Equation Models, Robustness (Statistics), Prediction
Peer reviewed
PDF on ERIC: Download full text
Fatih Orcan – International Journal of Assessment Tools in Education, 2023
Among reliability coefficients, Cronbach's alpha and McDonald's omega are the most commonly used. Alpha is based on inter-item correlations, while omega is based on the results of a factor analysis. This study uses simulated ordinal data sets to test whether alpha and omega produce different estimates. Their performances were compared according to the…
Descriptors: Statistical Analysis, Monte Carlo Methods, Correlation, Factor Analysis
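The alpha coefficient compared in the entry above can be computed directly from an item-score matrix. The sketch below is a generic illustration (toy data invented here, not from the article):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents, 4 positively correlated items
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
alpha = cronbach_alpha(scores)
print(round(alpha, 3))  # 0.949
```

Omega, by contrast, is computed from the loadings and residual variances of a fitted one-factor model, so it needs a factor-analysis step rather than a closed-form variance ratio.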
Peer reviewed
Direct link
Shi, Dexin; Lee, Taehun; Fairchild, Amanda J.; Maydeu-Olivares, Alberto – Educational and Psychological Measurement, 2020
This study compares two missing data procedures in the context of ordinal factor analysis models: pairwise deletion (PD; the default setting in Mplus) and multiple imputation (MI). We examine which procedure yields parameter estimates and model fit indices closer to those obtained from complete data. The performances of PD and MI are compared under a…
Descriptors: Factor Analysis, Statistical Analysis, Computation, Goodness of Fit
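Pairwise deletion, as studied above, estimates each covariance from whichever rows happen to observe both variables. A minimal sketch (generic illustration, not the Mplus implementation):

```python
import numpy as np

def pairwise_cov(data):
    """Covariance matrix under pairwise deletion: each (i, j) entry
    uses only the rows where BOTH variables are observed (non-NaN),
    so different entries may be based on different subsamples."""
    data = np.asarray(data, dtype=float)
    p = data.shape[1]
    cov = np.full((p, p), np.nan)
    for i in range(p):
        for j in range(p):
            mask = ~np.isnan(data[:, i]) & ~np.isnan(data[:, j])
            if mask.sum() > 1:
                cov[i, j] = np.cov(data[mask, i], data[mask, j], ddof=1)[0, 1]
    return cov

# Toy data with scattered missingness
X = np.array([
    [1.0, 2.0, np.nan],
    [2.0, np.nan, 3.0],
    [3.0, 4.0, 5.0],
    [4.0, 5.0, 7.0],
])
C = pairwise_cov(X)
print(C)
```

Because each entry can rest on a different subsample, the resulting matrix is not guaranteed to be positive definite, which is one reason the comparison with multiple imputation matters.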
Peer reviewed
Direct link
Paek, Insu; Cui, Mengyao; Öztürk Gübes, Nese; Yang, Yanyun – Educational and Psychological Measurement, 2018
The purpose of this article is twofold. The first is to provide evaluative information on the recovery of model parameters and their standard errors for the two-parameter item response theory (IRT) model using different estimation methods by Mplus. The second is to provide easily accessible information for practitioners, instructors, and students…
Descriptors: Item Response Theory, Computation, Factor Analysis, Statistical Analysis
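The two-parameter IRT model evaluated above has a simple closed form for the response probability. A generic sketch in its logistic form (parameter values invented here):

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL IRT model (logistic form): probability of a correct response
    given ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty, the probability is exactly 0.5
p_at_b = p_correct_2pl(0.0, a=1.2, b=0.0)
print(p_at_b)  # 0.5

# Higher ability raises the probability of success
p_high = p_correct_2pl(1.0, a=1.2, b=0.0)
print(round(p_high, 3))
```

Estimation methods in Mplus differ in how they recover a and b (and their standard errors) from observed response patterns; the model itself is just this curve per item.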
Peer reviewed
PDF on ERIC: Download full text
Önen, Emine – Universal Journal of Educational Research, 2019
This simulation study was conducted to compare the performance of frequentist and Bayesian approaches in terms of power to detect model misspecification, in the form of omitted cross-loadings, in CFA models with respect to several variables (number of omitted cross-loadings, magnitude of main loadings, number of factors, number of indicators…
Descriptors: Factor Analysis, Bayesian Statistics, Comparative Analysis, Statistical Analysis
Peer reviewed
Direct link
Andrich, David – Educational Measurement: Issues and Practice, 2016
Since Cronbach's (1951) elaboration of coefficient alpha following its introduction by Guttman (1945), this coefficient has become ubiquitous in characterizing assessment instruments in education, psychology, and other social sciences. Also ubiquitous are caveats on the calculation and interpretation of this coefficient. This article summarizes a recent contribution…
Descriptors: Computation, Correlation, Test Theory, Measures (Individuals)
Peer reviewed
Direct link
Dimitrov, Dimiter M. – Measurement and Evaluation in Counseling and Development, 2017
This article offers an approach to examining differential item functioning (DIF) under its item response theory (IRT) treatment in the framework of confirmatory factor analysis (CFA). The approach is based on integrating IRT- and CFA-based testing of DIF and using bias-corrected bootstrap confidence intervals with a syntax code in Mplus.
Descriptors: Test Bias, Item Response Theory, Factor Analysis, Evaluation Methods
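The bias-corrected bootstrap confidence interval used in the DIF approach above can be sketched in generic form. This is an illustration of the BC interval idea only, not the article's Mplus syntax; the sample data and statistic are invented:

```python
import random
import statistics
from statistics import NormalDist

def bc_bootstrap_ci(data, stat, n_boot=2000, level=0.95, seed=1):
    """Bias-corrected (BC) percentile bootstrap CI for statistic `stat`.
    The bias correction shifts the percentile endpoints based on how
    many bootstrap estimates fall below the original estimate."""
    rng = random.Random(seed)
    theta_hat = stat(data)
    boots = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    nd = NormalDist()
    prop = sum(b < theta_hat for b in boots) / n_boot
    z0 = nd.inv_cdf(min(max(prop, 1e-6), 1 - 1e-6))  # bias-correction term
    z = nd.inv_cdf((1 + level) / 2)
    lo = nd.cdf(2 * z0 - z)   # adjusted lower percentile
    hi = nd.cdf(2 * z0 + z)   # adjusted upper percentile
    return (boots[int(lo * (n_boot - 1))], boots[int(hi * (n_boot - 1))])

sample = [2.1, 2.5, 2.9, 3.2, 3.8, 4.0, 4.4, 5.1]
ci = bc_bootstrap_ci(sample, statistics.mean)
print(ci)
```

When the bootstrap distribution is unbiased (half the estimates below the original), z0 is zero and the BC interval reduces to the plain percentile interval.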
Peer reviewed
PDF on ERIC: Download full text
Guasch, Marc; Haro, Juan; Boada, Roger – Psicologica: International Journal of Methodology and Experimental Psychology, 2017
With the increasing refinement of language processing models and new discoveries about which variables can modulate these processes, stimulus selection for experiments with a factorial design is becoming a tough task. Selecting sets of words that differ on one variable, while matching those same words on dozens of other confounding variables…
Descriptors: Factor Analysis, Language Processing, Design, Cluster Grouping
Peer reviewed
PDF on ERIC: Download full text
Kogar, Hakan – International Journal of Assessment Tools in Education, 2018
The aim of this simulation study was to determine the relationship between true latent scores and estimated latent scores by including various control variables and different statistical models. The study also aimed to compare the statistical models and determine the effects of different distribution types, response formats, and sample sizes on latent…
Descriptors: Simulation, Context Effect, Computation, Statistical Analysis
Koziol, Natalie A.; Bovaird, James A. – Educational and Psychological Measurement, 2018
Evaluations of measurement invariance provide essential construct validity evidence--a prerequisite for seeking meaning in psychological and educational research and ensuring fair testing procedures in high-stakes settings. However, the quality of such evidence is partly dependent on the validity of the resulting statistical conclusions. Type I or…
Descriptors: Computation, Tests, Error of Measurement, Comparative Analysis
Peer reviewed
Direct link
Camilli, Gregory; Fox, Jean-Paul – Journal of Educational and Behavioral Statistics, 2015
An aggregation strategy is proposed to address practical limitations related to computing resources for two-level multidimensional item response theory (MIRT) models with large data sets. The aggregate model is derived by integration of the normal ogive model, and an adaptation of the stochastic approximation expectation maximization…
Descriptors: Factor Analysis, Item Response Theory, Grade 4, Simulation
Peer reviewed
Direct link
Schulz, Andreas – Mathematical Thinking and Learning: An International Journal, 2018
Theoretical analysis of whole number-based calculation strategies and digit-based algorithms for multi-digit multiplication and division reveals that strategy use includes two kinds of reasoning: reasoning about the relations between numbers and reasoning about the relations between operations. In contrast, algorithms aim to reduce the necessary…
Descriptors: Computation, Mathematics Instruction, Multiplication, Arithmetic
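The number-based strategies contrasted above reason about relations between numbers, for instance by splitting one factor into place-value parts and summing the partial products. A generic sketch of that decomposition (example numbers invented here, not from the article):

```python
def multiply_by_place_value(a, b):
    """Number-based multiplication strategy: split b into place-value
    parts and add the partial products, e.g. 23 * 14 = 23*4 + 23*10."""
    partials = []
    place = 1
    while b > 0:
        digit = b % 10
        if digit:
            partials.append(a * digit * place)  # one partial product per digit
        b //= 10
        place *= 10
    return partials, sum(partials)

parts, product = multiply_by_place_value(23, 14)
print(parts, product)  # [92, 230] 322
```

A digit-based algorithm performs the same partial products but hides the place-value reasoning inside carrying rules, which is the contrast the article draws.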
Peer reviewed
Direct link
Cheng, Weiyi; Lei, Pui-Wa; DiPerna, James C. – Journal of Experimental Education, 2017
The purpose of the current study was to examine dimensionality and concurrent validity evidence of the EARLI numeracy measures (DiPerna, Morgan, & Lei, 2007), which were developed to assess key skills such as number identification, counting, and basic arithmetic. Two methods (NOHARM with approximate chi-square test and DIMTEST with DETECT…
Descriptors: Construct Validity, Numeracy, Mathematics Tests, Statistical Analysis
Peer reviewed
Direct link
Stewart, Christie; Root, Melissa M.; Koriakin, Taylor; Choi, Dowon; Luria, Sarah R.; Bray, Melissa A.; Sassu, Kari; Maykel, Cheryl; O'Rourke, Patricia; Courville, Troy – Journal of Psychoeducational Assessment, 2017
This study investigated developmental gender differences in mathematics achievement, using the child and adolescent portion (ages 6-19 years) of the Kaufman Test of Educational Achievement-Third Edition (KTEA-3). Participants were divided into two age categories: 6 to 11 and 12 to 19. Error categories within the Math Concepts & Applications…
Descriptors: Gender Differences, Error Patterns, Mathematics Tests, Achievement Tests
Peer reviewed
PDF on ERIC: Download full text
Courtney, Matthew Gordon Ray – Practical Assessment, Research & Evaluation, 2013
Exploratory factor analysis (EFA) is a common technique utilized in the development of assessment instruments. The key question when performing this procedure is how to best estimate the number of factors to retain. This is especially important as under- or over-extraction may lead to erroneous conclusions. Although recent advancements have been…
Descriptors: Factor Analysis, Computer Software, Open Source Technology, Computation
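One widely used answer to the factor-retention question raised above is Horn's parallel analysis: keep components whose observed eigenvalues exceed those expected from random data of the same shape. A minimal sketch (simulated one-factor data invented here for illustration):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: retain components whose observed
    correlation-matrix eigenvalues exceed the mean eigenvalues from
    random normal data of the same (n, p) shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.empty((n_sims, p))
    for s in range(n_sims):
        sim = rng.standard_normal((n, p))
        rand[s] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    threshold = rand.mean(axis=0)  # average random eigenvalue per rank
    return int(np.sum(obs_eigs > threshold))

# Toy data: one strong common factor behind 6 indicators
rng = np.random.default_rng(42)
factor = rng.standard_normal((300, 1))
data = factor @ np.ones((1, 6)) + 0.5 * rng.standard_normal((300, 6))
k = parallel_analysis(data)
print(k)  # 1
```

Under-extraction would miss real structure and over-extraction would fit noise, which is exactly the risk of erroneous conclusions the entry describes.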