Showing 76 to 90 of 349 results
Peer reviewed
Direct link
Gnambs, Timo; Staufenbiel, Thomas – Research Synthesis Methods, 2016
Two new methods for the meta-analysis of factor loadings are introduced and evaluated by Monte Carlo simulations. The direct method pools each factor loading individually, whereas the indirect method synthesizes correlation matrices reproduced from factor loadings. The results of the two simulations demonstrated that the accuracy of…
Descriptors: Accuracy, Meta Analysis, Factor Structure, Monte Carlo Methods
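The "direct method" summarized above pools each factor loading individually across studies. A minimal sketch of such pooling, using fixed-effect inverse-variance weighting with purely illustrative loadings and standard errors (not values from the paper):

```python
# Sketch of pooling one item's factor loading across studies by
# inverse-variance weighting (illustrative numbers, not from the paper).

def pool_loadings(loadings, ses):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * l for w, l in zip(weights, loadings)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical loadings for one item reported in three studies
est, se = pool_loadings([0.62, 0.70, 0.55], [0.05, 0.08, 0.06])
```

Studies with smaller standard errors get more weight, so the pooled loading sits closest to the most precisely estimated value.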
Peer reviewed
Direct link
Joo, Seang-hwane; Wang, Yan; Ferron, John M. – AERA Online Paper Repository, 2017
Multiple-baseline studies provide meta-analysts the opportunity to compute effect sizes based on either within-series comparisons of treatment phase to baseline phase observations, or time specific between-series comparisons of observations from those that have started treatment to observations of those that are still in baseline. The advantage of…
Descriptors: Meta Analysis, Effect Size, Hierarchical Linear Modeling, Computation
Peer reviewed
Direct link
McCoach, D. Betsy; Rifenbark, Graham G.; Newton, Sarah D.; Li, Xiaoran; Kooken, Janice; Yomtov, Dani; Gambino, Anthony J.; Bellara, Aarti – Journal of Educational and Behavioral Statistics, 2018
This study compared five common multilevel software packages (HLM 7, Mplus 7.4, R lme4 1.1-12, Stata 14.1, and SAS 9.4) via Monte Carlo simulation to determine how the programs differ in estimation accuracy and speed, as well as convergence, when modeling multiple randomly varying slopes of different magnitudes. Simulated data…
Descriptors: Hierarchical Linear Modeling, Computer Software, Comparative Analysis, Monte Carlo Methods
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
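The multiple testing procedures (MTPs) discussed above adjust p-values so that testing many outcomes or subgroups does not inflate the rate of spurious findings. A minimal sketch of one common MTP, the Holm step-down adjustment (the p-values are illustrative, not from the paper):

```python
# Sketch of the Holm step-down multiple testing procedure, which
# controls the familywise error rate. Illustrative p-values only.

def holm_adjust(pvalues):
    """Return Holm-adjusted p-values in the original order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * pvalues[i])
        running_max = max(running_max, adj)  # enforce monotone adjusted values
        adjusted[i] = running_max
    return adjusted

# Four hypothetical tests, e.g. one intervention on four outcomes
raw = [0.01, 0.04, 0.03, 0.20]
adj = holm_adjust(raw)
```

The smallest raw p-value gets the harshest multiplier (m), the next smallest m-1, and so on, which makes Holm uniformly more powerful than a plain Bonferroni correction while controlling the same error rate.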
Peer reviewed
Direct link
Huang, Francis L. – Educational and Psychological Measurement, 2018
Cluster randomized trials involving participants nested within intact treatment and control groups are commonly performed in various educational, psychological, and biomedical studies. However, recruiting and retaining intact groups present various practical, financial, and logistical challenges to evaluators and often, cluster randomized trials…
Descriptors: Multivariate Analysis, Sampling, Statistical Inference, Data Analysis
Peer reviewed
Direct link
Cao, Mengyang; Tay, Louis; Liu, Yaowu – Educational and Psychological Measurement, 2017
This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach utilizes the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. Monte Carlo…
Descriptors: Monte Carlo Methods, Test Items, Test Bias, Error of Measurement
Peer reviewed
Direct link
Kelcey, Benjamin; Dong, Nianbo; Spybrook, Jessaca; Cox, Kyle – Journal of Educational and Behavioral Statistics, 2017
Designs that facilitate inferences concerning both the total and indirect effects of a treatment potentially offer a more holistic description of interventions because they can complement "what works" questions with the comprehensive study of the causal connections implied by substantive theories. Mapping the sensitivity of designs to…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Mediation Theory, Models
Peer reviewed
Direct link
Finch, William Holmes; Hernandez Finch, Maria E. – AERA Online Paper Repository, 2017
High-dimensional multivariate data, where the number of variables approaches or exceeds the sample size, is an increasingly common occurrence for social scientists. Several tools exist for dealing with such data in the context of univariate regression, including regularization methods such as the lasso, elastic net, and ridge regression, as well as the…
Descriptors: Multivariate Analysis, Regression (Statistics), Sampling, Sample Size
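The regularization methods named above all stabilize regression estimates by penalizing coefficient size. The effect is easiest to see in the simplest possible case, a ridge penalty with one centered predictor and no intercept, where a closed form exists (data below are illustrative, not from the paper):

```python
# Sketch of ridge regularization for a single centered predictor with
# no intercept: the penalty lam shrinks the least-squares slope toward
# zero, which is what stabilizes estimates in high-dimensional settings.
# Illustrative data, not from the paper.

def ridge_slope(x, y, lam):
    """Closed-form ridge estimate: sum(x*y) / (sum(x^2) + lam)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi ** 2 for xi in x) + lam)

x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [-4.1, -2.0, 0.1, 2.0, 4.0]

b_ols = ridge_slope(x, y, 0.0)    # lam = 0 recovers ordinary least squares
b_ridge = ridge_slope(x, y, 5.0)  # positive lam shrinks the slope
```

With many correlated predictors the same idea applies matrix-wise, trading a little bias for a large reduction in variance.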
Peer reviewed
Direct link
Lee, Soo; Suh, Youngsuk – Journal of Educational Measurement, 2018
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
Descriptors: Item Response Theory, Sample Size, Models, Error of Measurement
Peer reviewed
Direct link
Coulombe, Patrick; Selig, James P.; Delaney, Harold D. – International Journal of Behavioral Development, 2016
Researchers often collect longitudinal data to model change over time in a phenomenon of interest. Inevitably, there will be some variation across individuals in specific time intervals between assessments. In this simulation study of growth curve modeling, we investigate how ignoring individual differences in time points when modeling change over…
Descriptors: Individual Differences, Longitudinal Studies, Simulation, Change
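The problem studied above, ignoring individual variation in assessment timing, can be illustrated by fitting one person's growth slope twice: once with the actual assessment times and once with the rounded wave numbers a time-structured model assumes (data are illustrative, not from the paper):

```python
# Sketch of the timing issue: an individual's least-squares growth
# slope computed from actual assessment times vs. rounded wave numbers.
# Illustrative data, not from the paper.

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

actual_times = [0.0, 1.4, 2.1, 3.6]  # when assessments really happened
wave_times = [0.0, 1.0, 2.0, 3.0]    # what a wave-coded model assumes
scores = [10.0, 12.8, 14.2, 17.2]

slope_actual = ols_slope(actual_times, scores)
slope_wave = ols_slope(wave_times, scores)
```

The two slopes differ whenever spacing between assessments varies, which is the distortion the simulation study quantifies at scale.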
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Peer reviewed
Direct link
Li, Jian; Lomax, Richard G. – Journal of Experimental Education, 2017
Using Monte Carlo simulations, this research examined the performance of four missing data methods in SEM under different multivariate distributional conditions. The effects of four independent variables (sample size, missing proportion, distribution shape, and factor loading magnitude) were investigated on six outcome variables: convergence rate,…
Descriptors: Monte Carlo Methods, Structural Equation Models, Evaluation Methods, Measurement Techniques
Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun – Grantee Submission, 2017
The normal-distribution-based likelihood ratio statistic T_ml = n*F_ml is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that T_ml follows a central chi-square distribution under H_0 and a noncentral chi-square…
Descriptors: Statistical Analysis, Evaluation Methods, Structural Equation Models, Reliability
Peer reviewed
PDF on ERIC (full text available)
Sengul Avsar, Asiye; Tavsancil, Ezel – Educational Sciences: Theory and Practice, 2017
This study analysed polytomous items' psychometric properties according to nonparametric item response theory (NIRT) models. Thus, simulated datasets--three different test lengths (10, 20 and 30 items), three sample distributions (normal, right and left skewed) and three sample sizes (100, 250 and 500)--were generated by conducting 20…
Descriptors: Test Items, Psychometrics, Nonparametric Statistics, Item Response Theory
Peer reviewed
Direct link
Asún, Rodrigo A.; Rdz-Navarro, Karina; Alvarado, Jesús M. – Sociological Methods & Research, 2016
This study compares the performance of two approaches in analysing four-point Likert rating scales with a factorial model: the classical factor analysis (FA) and the item factor analysis (IFA). For FA, maximum likelihood and weighted least squares estimations using Pearson correlation matrices among items are compared. For IFA, diagonally weighted…
Descriptors: Likert Scales, Item Analysis, Factor Analysis, Comparative Analysis