| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 13 |
| Since 2007 (last 20 years) | 28 |
| Descriptor | Count |
| --- | --- |
| Effect Size | 45 |
| Monte Carlo Methods | 45 |
| Sample Size | 45 |
| Statistical Analysis | 16 |
| Error of Measurement | 15 |
| Correlation | 13 |
| Computation | 10 |
| Meta Analysis | 9 |
| Simulation | 9 |
| Statistical Bias | 8 |
| Comparative Analysis | 7 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 29 |
| Reports - Research | 22 |
| Reports - Evaluative | 16 |
| Speeches/Meeting Papers | 9 |
| Dissertations/Theses -… | 3 |
| Guides - Non-Classroom | 3 |
| Information Analyses | 2 |
| Numerical/Quantitative Data | 1 |
| Education Level | Count |
| --- | --- |
| Elementary Education | 1 |
| Audience | Count |
| --- | --- |
| Researchers | 4 |
| Assessments and Surveys | Count |
| --- | --- |
| Early Childhood Longitudinal… | 1 |
| National Assessment of… | 1 |
Kush, Joseph M.; Konold, Timothy R.; Bradshaw, Catherine P. – Journal of Experimental Education, 2022
In two-level designs, the total sample is a function of both the number of Level 2 clusters and the average number of Level 1 units per cluster. Traditional multilevel power calculations rely on either the arithmetic average or the harmonic mean when estimating the average number of Level 1 units across clusters of unbalanced size. The current…
Descriptors: Multivariate Analysis, Randomized Controlled Trials, Monte Carlo Methods, Sample Size
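The contrast this abstract draws between the arithmetic average and the harmonic mean of unbalanced cluster sizes can be illustrated with a few made-up numbers; the sketch below is not from the article, and the cluster sizes are assumed for demonstration only.

```python
# Hypothetical illustration of two ways to summarize unbalanced Level 1
# cluster sizes in a two-level design; the cluster sizes are invented.
from statistics import harmonic_mean, mean

cluster_sizes = [5, 8, 12, 20, 35, 60]  # assumed Level 1 counts per Level 2 cluster

arith = mean(cluster_sizes)             # arithmetic average = 23.33
harm = harmonic_mean(cluster_sizes)     # harmonic mean ≈ 11.92, pulled toward the small clusters

print(f"arithmetic mean = {arith:.2f}, harmonic mean = {harm:.2f}")
```

Because the harmonic mean is dominated by the smaller clusters, the two summaries can imply noticeably different power estimates for the same unbalanced design.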
Kush, Joseph M.; Konold, Timothy R.; Bradshaw, Catherine P. – Grantee Submission, 2021
Power in multilevel models remains an area of interest to both methodologists and substantive researchers. In two-level designs, the total sample is a function of both the number of level-2 (e.g., schools) clusters and the average number of level-1 (e.g., classrooms) units per cluster. Traditional multilevel power calculations rely on either the…
Descriptors: Multivariate Analysis, Randomized Controlled Trials, Monte Carlo Methods, Sample Size
Simsek, Ahmet Salih – International Journal of Assessment Tools in Education, 2023
The Likert-type item is the most popular response format for collecting data in social, educational, and psychological studies through scales or questionnaires. However, there is no consensus on whether parametric or non-parametric tests should be preferred when analyzing Likert-type data. This study examined the statistical power of parametric and…
Descriptors: Error of Measurement, Likert Scales, Nonparametric Statistics, Statistical Analysis
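Questions like this are often explored by simulation. The sketch below is a minimal, generic Monte Carlo comparison of an independent-samples t-test and the Mann-Whitney U test on 5-point Likert-type responses; the group size, response probabilities, and replication count are assumptions for illustration and do not reproduce the article's design.

```python
# Generic Monte Carlo sketch: empirical power of a parametric vs. a
# non-parametric test on simulated 5-point Likert responses (assumed values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, alpha = 50, 2000, 0.05
categories = np.arange(1, 6)
p_control = [0.15, 0.25, 0.30, 0.20, 0.10]  # hypothetical response distribution
p_treat   = [0.08, 0.17, 0.28, 0.27, 0.20]  # shifted upward to create a true effect

hits_t = hits_u = 0
for _ in range(reps):
    x = rng.choice(categories, size=n, p=p_control)
    y = rng.choice(categories, size=n, p=p_treat)
    if stats.ttest_ind(x, y).pvalue < alpha:
        hits_t += 1
    if stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha:
        hits_u += 1

print(f"t-test power       ≈ {hits_t / reps:.3f}")
print(f"Mann-Whitney power  ≈ {hits_u / reps:.3f}")
```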
Nazari, Sanaz; Leite, Walter L.; Huggins-Manley, A. Corinne – Journal of Experimental Education, 2023
Piecewise latent growth models (PWLGMs) can be used to study changes in the growth trajectory of an outcome due to an event or condition, such as exposure to an intervention. When there are multiple outcomes of interest, a researcher may choose to fit a series of PWLGMs or a single parallel-process PWLGM. A comparison of these models is…
Descriptors: Growth Models, Statistical Analysis, Intervention, Comparative Analysis
Hong, Sanghyun; Reed, W. Robert – Research Synthesis Methods, 2021
The purpose of this study is to show how Monte Carlo analysis of meta-analytic estimators can be used to select estimators for specific research situations. Our analysis conducts 1620 individual experiments, where each experiment is defined by a unique combination of sample size, effect size, effect size heterogeneity, publication selection…
Descriptors: Monte Carlo Methods, Meta Analysis, Research Methodology, Experiments
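The 1620 experiments arise from crossing the simulation factors the abstract lists. A generic sketch of how such a factorial grid is enumerated is shown below; the factor levels are invented and do not match the article's actual conditions.

```python
# Hypothetical sketch of enumerating a factorial Monte Carlo grid; the factor
# levels below are assumed and do not reproduce the article's 1620 conditions.
from itertools import product

sample_sizes  = [10, 20, 40, 80]                  # number of primary studies (assumed)
effect_sizes  = [0.0, 0.2, 0.5]                   # true mean effects (assumed)
heterogeneity = [0.0, 0.1, 0.3]                   # between-study variance (assumed)
pub_selection = ["none", "moderate", "strong"]    # publication selection regimes (assumed)

conditions = list(product(sample_sizes, effect_sizes, heterogeneity, pub_selection))
print(len(conditions), "experimental conditions")  # 4 * 3 * 3 * 3 = 108
```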
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
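As a generic illustration of what a multiple testing procedure does (not the specific MTPs evaluated in the article), the sketch below adjusts a set of invented p-values with statsmodels' Holm and Benjamini-Hochberg corrections.

```python
# Apply two common multiple testing procedures to hypothetical p-values from
# several outcome/subgroup contrasts; the p-values are invented.
from statsmodels.stats.multitest import multipletests

p_values = [0.003, 0.012, 0.049, 0.21, 0.34]  # hypothetical per-test p-values

for method in ("holm", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, [round(p, 3) for p in p_adj], reject.tolist())
```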
Joo, Seang-hwane; Wang, Yan; Ferron, John M. – AERA Online Paper Repository, 2017
Multiple-baseline studies provide meta-analysts the opportunity to compute effect sizes based on either within-series comparisons of treatment phase to baseline phase observations, or time specific between-series comparisons of observations from those that have started treatment to observations of those that are still in baseline. The advantage of…
Descriptors: Meta Analysis, Effect Size, Hierarchical Linear Modeling, Computation
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Cao, Mengyang; Tay, Louis; Liu, Yaowu – Educational and Psychological Measurement, 2017
This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach utilizes the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. Monte Carlo…
Descriptors: Monte Carlo Methods, Test Items, Test Bias, Error of Measurement
Kelcey, Benjamin; Dong, Nianbo; Spybrook, Jessaca; Cox, Kyle – Journal of Educational and Behavioral Statistics, 2017
Designs that facilitate inferences concerning both the total and indirect effects of a treatment potentially offer a more holistic description of interventions because they can complement "what works" questions with the comprehensive study of the causal connections implied by substantive theories. Mapping the sensitivity of designs to…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Mediation Theory, Models
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun – Grantee Submission, 2017
The normal-distribution-based likelihood ratio statistic T_ml = nF_ml is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that T_ml follows a central chi-square distribution under H_0 and a noncentral chi-square…
Descriptors: Statistical Analysis, Evaluation Methods, Structural Equation Models, Reliability
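The power calculation described in this abstract follows the standard chi-square framework, which can be sketched directly; the degrees of freedom, sample size, and minimum fit-function value below are assumed values, not the article's.

```python
# Minimal sketch of chi-square-based power for an SEM test statistic: under H0
# T_ml = n * F_ml is treated as central chi-square, and under the alternative
# as noncentral chi-square with noncentrality n * F_ml(alt). Inputs are assumed.
from scipy.stats import chi2, ncx2

df, alpha = 24, 0.05
n, f_ml_alt = 200, 0.10                     # hypothetical sample size and fit-function value
noncentrality = n * f_ml_alt                # noncentrality parameter under the alternative

crit = chi2.ppf(1 - alpha, df)              # rejection threshold under the central chi-square
power = ncx2.sf(crit, df, noncentrality)    # P(T_ml > crit) under the noncentral chi-square

print(f"critical value = {crit:.2f}, power = {power:.3f}")
```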
Jan, Show-Li; Shieh, Gwowen – Journal of Educational and Behavioral Statistics, 2014
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
Descriptors: Sample Size, Statistical Analysis, Computation, Probability
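A quick way to see why the homogeneity-of-variance assumption matters is a small Monte Carlo check of the classical one-way ANOVA's Type I error under unequal variances; the group sizes, standard deviations, and replication count below are assumed, and the sketch is not the article's study.

```python
# Monte Carlo check (assumed settings): Type I error of the classical one-way
# ANOVA F test when group variances are unequal and group sizes are unbalanced.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(7)
sizes = [10, 20, 40]      # unbalanced group sizes (assumed)
sds   = [4.0, 2.0, 1.0]   # heterogeneous standard deviations (assumed)
reps, alpha = 5000, 0.05

rejections = 0
for _ in range(reps):
    groups = [rng.normal(0.0, sd, size=n) for n, sd in zip(sizes, sds)]
    if f_oneway(*groups).pvalue < alpha:   # all group means are equal, so any rejection is an error
        rejections += 1

# With the smallest group having the largest variance, the empirical rate
# typically exceeds the nominal level.
print(f"empirical Type I error ≈ {rejections / reps:.3f} (nominal {alpha})")
```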
Bonett, Douglas G. – Journal of Educational and Behavioral Statistics, 2015
Paired-samples designs are used frequently in educational and behavioral research. In applications where the response variable is quantitative, researchers are encouraged to supplement the results of a paired-samples t-test with a confidence interval (CI) for a mean difference or a standardized mean difference. Six CIs for standardized mean…
Descriptors: Educational Research, Sample Size, Statistical Analysis, Effect Size
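For context, the sketch below computes the paired-samples standardized mean difference d_z = mean(diff) / SD(diff) together with one common large-sample (normal-approximation) interval; it is not necessarily one of the six intervals the article compares, and the paired scores are invented.

```python
# Hypothetical paired scores; compute d_z and a simple large-sample 95% CI.
import numpy as np
from scipy.stats import norm

pre  = np.array([12, 15, 11, 14, 13, 16, 12, 15, 14, 13], dtype=float)
post = np.array([14, 16, 13, 15, 15, 18, 13, 17, 15, 14], dtype=float)

diff = post - pre
n = len(diff)
d_z = diff.mean() / diff.std(ddof=1)
se = np.sqrt(1 / n + d_z**2 / (2 * n))   # common large-sample variance approximation
z = norm.ppf(0.975)

print(f"d_z = {d_z:.2f}, 95% CI = [{d_z - z*se:.2f}, {d_z + z*se:.2f}]")
```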
