Peer reviewed: Kaplan, David; Ferguson, Aaron J. – Structural Equation Modeling, 1999
Uses a bootstrap simulation study to examine sample weights in latent variable models when a simple random sample is drawn from a population containing a mixture of strata. Results show that ignoring weights can lead to serious bias in latent variable model parameters and reveal the advantages of using sample weights. (SLD)
Descriptors: Models, Sample Size, Simulation, Statistical Bias
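A minimal sketch of the problem the abstract describes, with hypothetical numbers and a plain mean rather than the authors' latent variable models: when strata are sampled at unequal rates, the unweighted estimate is biased toward the oversampled stratum, while inverse-probability (design) weights recover the population value.

```python
import random

random.seed(0)

# Hypothetical population: two strata with different means.
stratum_a = [random.gauss(0.0, 1.0) for _ in range(9000)]   # 90% of population
stratum_b = [random.gauss(3.0, 1.0) for _ in range(1000)]   # 10% of population
pop_mean = (sum(stratum_a) + sum(stratum_b)) / 10000

# Disproportionate sample: stratum B is heavily oversampled.
sample_a = random.sample(stratum_a, 100)   # sampling fraction 100/9000
sample_b = random.sample(stratum_b, 100)   # sampling fraction 100/1000

# Unweighted estimate ignores the design and is pulled toward stratum B.
unweighted = (sum(sample_a) + sum(sample_b)) / 200

# Design weights = inverse inclusion probabilities.
w_a, w_b = 9000 / 100, 1000 / 100
weighted = (w_a * sum(sample_a) + w_b * sum(sample_b)) / (w_a * 100 + w_b * 100)

print(f"population {pop_mean:.2f}, unweighted {unweighted:.2f}, weighted {weighted:.2f}")
```

The same logic carries over to latent variable model parameters: the fitted model sees a sample whose composition misrepresents the population unless the weights enter the estimation.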
Peer reviewed: Cheung, Gordon W.; Rensvold, Roger B. – Structural Equation Modeling, 2002
Examined 20 goodness-of-fit indexes based on the minimum fit function using a simulation under the two-group situation. Results support the use of the delta comparative fit index, delta Gamma hat, and delta McDonald's Noncentrality Index to evaluate measurement invariance. These three approaches are independent of model complexity and sample size.…
Descriptors: Goodness of Fit, Models, Sample Size, Simulation
Peer reviewed: Julian, Marc W. – Structural Equation Modeling, 2001
Examined the effects of ignoring multilevel data structures in nonhierarchical covariance modeling using a Monte Carlo simulation. Results suggest that when the magnitudes of intraclass correlations are less than 0.05 and the group size is small, the consequences of ignoring the data dependence within the multilevel data structures seem to be…
Descriptors: Correlation, Monte Carlo Methods, Sample Size, Simulation
Peer reviewed: Raykov, Tenko; Marcoulides, George A. – Structural Equation Modeling, 2000
Outlines a method for comparing completely standardized solutions in multiple groups. The method is based on a correlation structure analysis of equal-size samples and uses the correlation distribution theory implemented in the structural equation modeling program RAMONA. (SLD)
Descriptors: Comparative Analysis, Correlation, Sample Size, Structural Equation Models
Kim, Kevin H. – Structural Equation Modeling, 2005
The relation among fit indexes, power, and sample size in structural equation modeling is examined. The noncentrality parameter is required to compute power. The two existing methods of computing power have estimated the noncentrality parameter by specifying an alternative hypothesis or alternative fit. These methods cannot be implemented easily and…
Descriptors: Structural Equation Models, Sample Size, Goodness of Fit
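The power computation this abstract refers to rests on the noncentral chi-square distribution; the standard formulation (in the Satorra–Saris tradition, and not necessarily Kim's specific proposal) can be sketched as:

```latex
% Noncentrality under a misspecified model, with F_0 the population
% minimum of the ML fit function and N the sample size:
\lambda = (N - 1)\,F_0
% Power of the likelihood-ratio test at level \alpha, where
% c_{1-\alpha} is the critical value of the central \chi^2_{df}:
\mathrm{power} = 1 - F_{\chi^2_{df,\,\lambda}}\!\bigl(c_{1-\alpha}\bigr)
```

Specifying an alternative hypothesis (explicit alternative parameter values) or an alternative fit (a target value of an index such as RMSEA) are the two routes to obtaining \(\lambda\) that the abstract mentions.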
Peer reviewed: Lubke, Gitta H.; Dolan, Connor V. – Structural Equation Modeling, 2003
Simulation results show that the power to detect small mean differences when fitting a model with free residual variances across groups decreases as the difference in R squared increases. This decrease is more pronounced in the presence of correlated errors and if group sample sizes differ. (SLD)
Descriptors: Correlation, Factor Structure, Sample Size, Simulation
Peer reviewed: Stapleton, Laura M. – Structural Equation Modeling, 2002
Studied the use of different weighting techniques in structural equation modeling and found, through simulation, that the use of an effective sample size weight provides unbiased estimates of key parameters and their sampling variances. Also discusses use of a popular normalization technique of scaling weights. (SLD)
Descriptors: Estimation (Mathematics), Sample Size, Scaling, Simulation
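The two weight-scaling schemes the abstract contrasts can be sketched with hypothetical weights (this shows the arithmetic only, not Stapleton's simulation design): the popular normalization rescales weights to sum to the sample size n, while the effective-sample-size normalization rescales them to sum to n_eff = (Σw)² / Σw², which is at most n and shrinks as the weights become more variable.

```python
# Hypothetical raw design weights for a sample of 6 cases.
w = [1.0, 1.0, 2.0, 2.0, 4.0, 4.0]
n = len(w)
sum_w = sum(w)

# Popular normalization: weights sum to the sample size n.
w_n = [wi * n / sum_w for wi in w]

# Effective-sample-size normalization: weights sum to
# n_eff = (sum w)^2 / sum(w^2)  (<= n whenever weights vary).
n_eff = sum_w ** 2 / sum(wi ** 2 for wi in w)
w_eff = [wi * n_eff / sum_w for wi in w]

print(round(sum(w_n), 3), round(n_eff, 3), round(sum(w_eff), 3))
```

Because standard errors in SEM depend on the apparent sample size, summing the weights to n_eff rather than n is what yields the unbiased sampling variances the abstract reports.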
Peer reviewed: Muthen, Linda K.; Muthen, Bengt O. – Structural Equation Modeling, 2002
Demonstrates how substantive researchers can use a Monte Carlo study to decide on sample size and determine power. Presents confirmatory factor analysis and growth models as examples, conducting these analyses with the Mplus program (B. Muthen and L. Muthen 1998). (SLD)
Descriptors: Monte Carlo Methods, Power (Statistics), Research Methodology, Sample Size
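The Monte Carlo logic of the abstract can be sketched in a few lines. This toy version uses a known-variance z-test on a mean rather than the Mplus CFA and growth models, but the procedure is the same: fix a sample size and an assumed effect, simulate many replications, and take the rejection rate as the estimated power.

```python
import math
import random

random.seed(1)

def mc_power(n, effect, reps=2000, alpha=0.05):
    """Monte Carlo power: fraction of replications rejecting H0: mean = 0."""
    z_crit = 1.96  # two-sided critical value for alpha = .05
    hits = 0
    for _ in range(reps):
        sample = [random.gauss(effect, 1.0) for _ in range(n)]
        mean = sum(sample) / n
        z = mean / (1.0 / math.sqrt(n))  # known-variance z statistic
        if abs(z) > z_crit:
            hits += 1
    return hits / reps

# Power grows with sample size for a fixed effect (d = 0.3);
# to choose n, increase it until estimated power reaches the target (e.g., .80).
p50 = mc_power(50, 0.3)
p200 = mc_power(200, 0.3)
print(p50, p200)
```

In the article's setting the "replication" is instead a simulated data set fit by the CFA or growth model, and the rejection rate is tallied per parameter of interest.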
Peer reviewed: Raykov, Tenko – Structural Equation Modeling, 2000
Shows that the conventional noncentrality parameter estimator of covariance structure models, currently implemented in popular structural modeling programs, possesses asymptotically potentially large bias, variance, and mean squared error (MSE). Presents a formal expression for its large-sample bias and quantifies large-sample bias and MSE. (SLD)
Descriptors: Error of Measurement, Estimation (Mathematics), Sample Size, Statistical Bias
Peer reviewed: Nevitt, Jonathan; Hancock, Gregory R. – Structural Equation Modeling, 2001
Evaluated the bootstrap method under varying conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Results for the bootstrap suggest the resampling-based method may be conservative in its control over model rejections, thus having an impact on the statistical power associated…
Descriptors: Estimation (Mathematics), Power (Statistics), Sample Size, Structural Equation Models
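The resampling space the abstract mentions is the observed sample itself; a minimal nonparametric bootstrap (illustrated here on a mean from skewed data, not the authors' SEM test statistics) resamples cases with replacement and reads inference off the resampling distribution.

```python
import random

random.seed(2)

# Hypothetical nonnormal (exponential, right-skewed) sample of 60 cases.
data = [random.expovariate(1.0) for _ in range(60)]

def bootstrap_means(sample, b=1000):
    """Draw b resamples with replacement; return the resample means."""
    n = len(sample)
    return [sum(random.choice(sample) for _ in range(n)) / n for _ in range(b)]

means = sorted(bootstrap_means(data))
lo, hi = means[24], means[974]  # 2.5th and 97.5th percentiles: 95% percentile CI
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```

In the SEM context each resample is refit and the model chi-square collected, so the number of bootstrap samples drawn controls how finely the reference distribution, and hence the rejection rate, is estimated.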
Peer reviewed: Hox, Joop J.; Maas, Cora J. M. – Structural Equation Modeling, 2001
Assessed the robustness of an estimation method for multilevel and path analysis with hierarchical data proposed by B. Muthen (1989) with unequal groups and small sample sizes and in the presence of a low or high intraclass correlation. Simulation results show the effects of varying these conditions on the within-group and between-groups part of…
Descriptors: Estimation (Mathematics), Robustness (Statistics), Sample Size, Simulation
Peer reviewed: Marsh, Herbert W. – Structural Equation Modeling, 1998
Sample covariance matrices constructed with pairwise deletion for randomly missing data were used in a simulation with three sample sizes and five levels of missing data (up to 50%). Parameter estimates were unbiased, parameter variability was largely explicable, and no sample covariance matrices were nonpositive definite except for 50% missing…
Descriptors: Estimation (Mathematics), Goodness of Fit, Sample Size, Simulation
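Pairwise deletion, the missing-data handling studied in the abstract, computes each element of the sample covariance matrix from all cases where both variables are observed; a minimal sketch (with `None` marking a randomly missing value, and not reproducing Marsh's simulation conditions):

```python
# Pairwise-deletion covariance: each covariance uses only the cases
# where BOTH variables are observed.
def pairwise_cov(x, y):
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    return sum((a - mx) * (b - my) for a, b in pairs) / (n - 1)

x = [1.0, 2.0, None, 4.0, 5.0]
y = [2.0, None, 6.0, 8.0, 10.0]
print(pairwise_cov(x, y))
```

Because different elements of the matrix are based on different subsets of cases, the assembled matrix is not guaranteed to be positive definite, which is exactly the failure mode the abstract reports at 50% missingness.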
Peer reviewed: Green, Kathy E. – Structural Equation Modeling, 1996
Scales constructed using principal components and Rasch measurement methods are compared under conditions of unclear constructs and marginal sample sizes with three data sets of increasing complexity. Results of the two methods were identical when data were stable and the structure unidimensional. (SLD)
Descriptors: Factor Analysis, Item Response Theory, Measurement Techniques, Sample Size
Peer reviewed: Olmos, Antonio; Hutchinson, Susan R. – Structural Equation Modeling, 1998
The behavior of eight measures of fit used to evaluate confirmatory factor analysis models was studied through Monte Carlo simulation to determine the extent to which sample size, model size, estimation procedure, and level of nonnormality affect fit when analyzing polytomous data. Implications of results for evaluating fit are discussed. (SLD)
Descriptors: Estimation (Mathematics), Goodness of Fit, Monte Carlo Methods, Sample Size
Meade, Adam W.; Lautenschlager, Gary J. – Structural Equation Modeling, 2004
In recent years, confirmatory factor analytic (CFA) techniques have become the most common method of testing for measurement equivalence/invariance (ME/I). However, no study has simulated data with known differences to determine how well these CFA techniques perform. This study utilizes data with a variety of known simulated differences in factor…
Descriptors: Factor Structure, Sample Size, Monte Carlo Methods, Evaluation Methods
