Publication Date
In 2025: 1
Since 2024: 3
Since 2021 (last 5 years): 12
Since 2016 (last 10 years): 43
Since 2006 (last 20 years): 104

Descriptor
Computation: 110
Effect Size: 110
Statistical Analysis: 110
Sample Size: 35
Meta Analysis: 30
Research Design: 25
Educational Research: 24
Comparative Analysis: 19
Correlation: 19
Intervention: 17
Sampling: 16
Publication Type
Journal Articles: 87
Reports - Research: 67
Reports - Descriptive: 19
Information Analyses: 12
Reports - Evaluative: 11
Guides - Non-Classroom: 6
Dissertations/Theses -…: 4
Opinion Papers: 2
Speeches/Meeting Papers: 2
Books: 1
Audience
Researchers: 5
Students: 1
Location
Netherlands: 2
Asia: 1
Canada (Montreal): 1
Europe: 1
Hawaii: 1
Hong Kong: 1
Illinois: 1
Indiana: 1
Nebraska: 1
New Zealand: 1
Pennsylvania: 1
Assessments and Surveys
Early Childhood Longitudinal…: 2
National Assessment of…: 2
Attitudes Toward Women Scale: 1
Measures of Academic Progress: 1
Program for International…: 1
Self Description Questionnaire: 1
Trends in International…: 1
Kaitlyn G. Fitzgerald; Elizabeth Tipton – Journal of Educational and Behavioral Statistics, 2025
This article presents methods for using extant data to improve the properties of estimators of the standardized mean difference (SMD) effect size. Because samples recruited into education research studies are often more homogeneous than the populations of policy interest, the variation in educational outcomes can be smaller in these samples than…
Descriptors: Data Use, Computation, Effect Size, Meta Analysis
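The estimator at issue in this record is the standardized mean difference. Below is a minimal sketch of the basic computation (a generic textbook SMD; the `sd_external` argument is a hypothetical stand-in for standardizing by a population SD estimated from extant data, not the authors' proposed estimator):

```python
import math

def pooled_sd(s1, n1, s2, n2):
    """Pooled within-study standard deviation of two groups."""
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

def smd(m1, s1, n1, m2, s2, n2, sd_external=None):
    """Standardized mean difference (Cohen's d).

    If sd_external is given (e.g. a population SD from extant data
    rather than the study sample), standardize by it instead of the
    pooled within-study SD.
    """
    sd = sd_external if sd_external is not None else pooled_sd(s1, n1, s2, n2)
    return (m1 - m2) / sd

# A sample that is more homogeneous than the population inflates the SMD:
d_sample = smd(10.0, 4.0, 50, 8.0, 4.0, 50)                  # pooled SD = 4 -> d = 0.5
d_extant = smd(10.0, 4.0, 50, 8.0, 4.0, 50, sd_external=8.0)  # wider population SD -> d = 0.25
```

This is only meant to show why the choice of standardizing SD matters; the article's methods for combining sample and extant-data information are more involved.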
Bulus, Metin – Journal of Research on Educational Effectiveness, 2022
Although Cattaneo et al. (2019) provided a data-driven framework for power computations in Regression Discontinuity Designs, in line with the rdrobust Stata and R commands, that allows higher-order functional forms for the score variable when using non-parametric local polynomial estimation, analogous advancements in their parametric estimation…
Descriptors: Effect Size, Computation, Regression (Statistics), Statistical Analysis
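At its core, a parametric power computation of the kind this record concerns reduces to a normal-approximation calculation. A generic two-sided z-test sketch (a deliberate simplification with illustrative numbers; the rdrobust/RDD machinery is far more involved):

```python
from statistics import NormalDist

def power_two_sided(effect, se, alpha=0.05):
    """Power of a two-sided z-test for a given effect size and
    standard error, under the normal approximation."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    z = effect / se
    # Probability of rejecting in either tail when the true effect is `effect`
    return (1 - nd.cdf(z_crit - z)) + nd.cdf(-z_crit - z)

power = power_two_sided(effect=0.25, se=0.1)  # roughly 0.71
```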
Joo, Seang-Hwane; Wang, Yan; Ferron, John; Beretvas, S. Natasha; Moeyaert, Mariola; Van Den Noortgate, Wim – Journal of Educational and Behavioral Statistics, 2022
Multiple baseline (MB) designs are becoming more prevalent in educational and behavioral research, and as they do, there is growing interest in combining effect size estimates across studies. To further refine the meta-analytic methods of estimating the effect, this study developed and compared eight alternative methods of estimating intervention…
Descriptors: Meta Analysis, Effect Size, Computation, Statistical Analysis
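Combining effect size estimates across studies typically starts from inverse-variance weighting. A minimal fixed-effect sketch with made-up inputs (illustrative only; the eight methods compared in the study are considerably more elaborate):

```python
def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled effect size
    and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, se

# Three hypothetical study-level estimates with their sampling variances:
pooled, se = fixed_effect_meta([0.3, 0.5, 0.4], [0.04, 0.09, 0.01])
```

The most precise study (variance 0.01) dominates the weighted average, which is the basic logic all the compared estimators build on.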
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook; Yanli Xie; Dung Pham; Peilin Qiu; Ning Sui – Grantee Submission, 2024
Multisite trials that randomize individuals (e.g., students) within sites (e.g., schools) or clusters (e.g., teachers/classrooms) within sites (e.g., schools) are commonly used for program evaluation because they provide opportunities to learn about treatment effects as well as their heterogeneity across sites and subgroups (defined by moderating…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Educational Research, Effect Size
Larry V. Hedges; William R. Shadish; Prathiba Natesan Batley – Grantee Submission, 2022
Currently, the design standards for single-case experimental designs (SCEDs) are based on validity considerations as prescribed by the What Works Clearinghouse. However, there is also a need for design considerations, such as power, based on statistical analyses. We derive and compute power for (AB)^k designs with multiple…
Descriptors: Statistical Analysis, Research Design, Computation, Case Studies
Prathiba Natesan Batley; Madhav Thamaran; Larry Vernon Hedges – Grantee Submission, 2023
Single case experimental designs are an important research design in behavioral and medical research. Although there are design standards prescribed by the What Works Clearinghouse for single case experimental designs, these standards do not include statistically derived power computations. Recently we derived the equations for computing power for…
Descriptors: Calculators, Computer Oriented Programs, Computation, Research Design
Clintin P. Davis-Stober; Jason Dana; David Kellen; Sara D. McMullin; Wes Bonifay – Grantee Submission, 2023
Conducting research with human subjects can be difficult because of limited sample sizes and small empirical effects. We demonstrate that this problem can yield patterns of results that are practically indistinguishable from flipping a coin to determine the direction of treatment effects. We use this idea of random conclusions to establish a…
Descriptors: Research Methodology, Sample Size, Effect Size, Hypothesis Testing
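The "random conclusions" idea can be illustrated by simulation: with a tiny effect and a small sample, the estimated direction of the treatment effect is close to a coin flip, while a large effect with a large sample almost always points the right way. A sketch with illustrative parameter values (not the authors' study designs):

```python
import random
import statistics

def direction_correct_rate(true_effect, sd, n, reps=2000, seed=7):
    """Fraction of simulated replications in which the sample mean
    difference has the same sign as the true (positive) effect."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(reps):
        treat = [rng.gauss(true_effect, sd) for _ in range(n)]
        ctrl = [rng.gauss(0.0, sd) for _ in range(n)]
        if statistics.mean(treat) - statistics.mean(ctrl) > 0:
            correct += 1
    return correct / reps

rate_small = direction_correct_rate(0.05, 1.0, 10)   # tiny effect, small n: near 0.5
rate_large = direction_correct_rate(0.8, 1.0, 100)   # large effect, large n: near 1.0
```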
Bonett, Douglas G.; Price, Robert M., Jr. – Journal of Educational and Behavioral Statistics, 2020
In studies where the response variable is measured on a ratio scale, a ratio of means or medians provides a standardized measure of effect size that is an alternative to the popular standardized mean difference. Confidence intervals for ratios of population means and medians in independent-samples designs and paired-samples designs are proposed as…
Descriptors: Computation, Statistical Analysis, Mathematical Concepts, Effect Size
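One standard way to build a confidence interval for a ratio of means is the delta method applied to the log of the ratio. A sketch under normal-approximation assumptions with made-up inputs (a textbook approach, not necessarily the interval the authors propose):

```python
import math
from statistics import NormalDist

def ratio_of_means_ci(m1, s1, n1, m2, s2, n2, conf=0.95):
    """Approximate CI for the ratio of two independent means measured
    on a ratio scale, via the delta method on log(m1/m2)."""
    log_ratio = math.log(m1 / m2)
    # Delta-method variance of log(mean): var(mean) / mean^2, per group
    var_log = (s1**2 / n1) / m1**2 + (s2**2 / n2) / m2**2
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * math.sqrt(var_log)
    return math.exp(log_ratio - half), math.exp(log_ratio + half)

# Hypothetical groups with means 12 and 10: ratio 1.2, CI roughly (1.08, 1.34)
lo, hi = ratio_of_means_ci(12.0, 3.0, 40, 10.0, 2.5, 40)
```

Exponentiating the endpoints keeps the interval on the ratio scale and respects its asymmetry, which is the usual motivation for working on the log scale.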
Ponce-Renova, Hector F. – Journal of New Approaches in Educational Research, 2022
This paper's objective was to teach Equivalence Testing as applied to educational research, emphasizing recommendations and increasing the quality of research. Equivalence Testing is a technique used to compare the effect sizes or means of two different studies to ascertain whether they are statistically equivalent. For making accessible Equivalence…
Descriptors: Educational Research, Effect Size, Statistical Analysis, Intervals
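The workhorse procedure for equivalence testing is TOST (two one-sided tests): equivalence is declared only if the difference is significantly greater than the lower margin and significantly less than the upper margin. A minimal z-test sketch with illustrative margins and standard errors (not the paper's worked examples):

```python
from statistics import NormalDist

def tost_equivalence(diff, se, margin, alpha=0.05):
    """Two one-sided tests: declare statistical equivalence if the
    difference is significantly above -margin AND significantly
    below +margin."""
    nd = NormalDist()
    p_lower = 1 - nd.cdf((diff + margin) / se)  # H0: diff <= -margin
    p_upper = nd.cdf((diff - margin) / se)      # H0: diff >= +margin
    return max(p_lower, p_upper) < alpha

# Precise estimate near zero: equivalent within +/- 0.2
equivalent = tost_equivalence(diff=0.02, se=0.05, margin=0.2)
# Same estimate, three times noisier: equivalence cannot be claimed
not_equivalent = tost_equivalence(diff=0.02, se=0.15, margin=0.2)
```

Note that failing TOST does not show the effects differ; it only shows the data are too imprecise to rule out a difference larger than the margin.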
Poom, Leo; af Wåhlberg, Anders – Research Synthesis Methods, 2022
In meta-analysis, effect sizes often need to be converted into a common metric. For this purpose, conversion formulas have been constructed; some are exact, while others are approximations whose accuracy has not yet been systematically tested. We performed Monte Carlo simulations where samples with pre-specified population correlations between the…
Descriptors: Meta Analysis, Effect Size, Mathematical Formulas, Monte Carlo Methods
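Two of the best-known conversions relate a correlation r and Cohen's d under an equal-group-size approximation. A sketch of these standard approximate formulas (not necessarily the specific set evaluated in the simulations):

```python
import math

def r_to_d(r):
    """Point-biserial correlation to Cohen's d (equal-group approximation)."""
    return 2 * r / math.sqrt(1 - r**2)

def d_to_r(d):
    """Cohen's d back to a correlation (equal-group approximation)."""
    return d / math.sqrt(d**2 + 4)

d = r_to_d(0.3)     # about 0.629
r_back = d_to_r(d)  # round-trips to 0.3
```

With unequal group sizes the factor of 4 in `d_to_r` is replaced by a term depending on the group proportions, which is one source of the approximation error such simulations probe.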
Simpson, Adrian – Journal of Research on Educational Effectiveness, 2023
Evidence-based education aims to support policy makers choosing between potential interventions. This rarely involves considering each in isolation; instead, sets of evidence regarding many potential policy interventions are considered. Filtering a set on any quantity measured with error risks the "winner's curse": conditional on…
Descriptors: Effect Size, Educational Research, Evidence Based Practice, Foreign Countries
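The winner's curse is easy to reproduce by simulation: conditioning on an estimate clearing a significance threshold inflates its expected value relative to the truth. A sketch with illustrative numbers (not the evidence sets analyzed in the article):

```python
import random
import statistics

def winners_curse(true_effect=0.2, se=0.1, reps=5000, z_cut=1.96, seed=1):
    """Mean estimated effect among 'winners' (estimates whose z-score
    clears the significance cutoff) versus the mean of all estimates."""
    rng = random.Random(seed)
    ests = [rng.gauss(true_effect, se) for _ in range(reps)]
    winners = [e for e in ests if e / se > z_cut]
    return statistics.mean(ests), statistics.mean(winners)

mean_all, mean_winners = winners_curse()
# mean_all is close to the true 0.2; mean_winners is noticeably larger
```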
Rubio-Aparicio, María; López-López, José Antonio; Viechtbauer, Wolfgang; Marín-Martínez, Fulgencio; Botella, Juan; Sánchez-Meca, Julio – Journal of Experimental Education, 2020
Mixed-effects models can be used to examine the association between a categorical moderator and the magnitude of the effect size. Two approaches are available to estimate the residual between-studies variance, τ²_res -- namely, separate estimation within each category of the moderator versus pooled estimation across all…
Descriptors: Meta Analysis, Effect Size, Computation, Classification
van Aert, Robbie C. M.; van Assen, Marcel A. L. M.; Viechtbauer, Wolfgang – Research Synthesis Methods, 2019
The effect sizes of studies included in a meta-analysis often do not share a common true effect size, due to differences in, for instance, the design of the studies. Estimates of this so-called between-study variance are usually imprecise. Hence, reporting a confidence interval together with a point estimate of the amount of between-study variance…
Descriptors: Meta Analysis, Computation, Statistical Analysis, Effect Size
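A common point estimator of the between-study variance is DerSimonian-Laird; the article's concern is supplementing such point estimates with confidence intervals. A sketch of the point estimate only, with made-up inputs:

```python
def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimator of the between-study
    variance tau^2 (one common choice among several)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    mean_fe = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (e - mean_fe) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi**2 for wi in w) / sw
    return max(0.0, (q - df) / c)  # truncate at zero

tau2 = dersimonian_laird_tau2([0.1, 0.5, 0.9], [0.04, 0.04, 0.04])  # 0.12
```

The truncation at zero is one reason interval estimates are informative: a point estimate of exactly zero can mask substantial uncertainty about the true heterogeneity.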
Walters, Glenn D. – International Journal of Social Research Methodology, 2018
As research on mediation has grown, so too has interest in identifying ways to assess the size of indirect effects in a mediation analysis. One such estimate -- the ratio of the indirect effect to the total effect (P_M) -- was tested in a sample of 21,297 children from the Early Childhood Developmental Study. Results showed that the two…
Descriptors: Effect Size, Computation, Statistical Analysis, Predictor Variables
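The P_M measure itself is a one-line computation: the indirect effect a*b divided by the total effect a*b + c'. A sketch with hypothetical path coefficients (not values from the study):

```python
def proportion_mediated(a, b, c_prime):
    """P_M: ratio of the indirect effect (a*b) to the total effect
    (a*b + c'), a common mediation effect-size measure."""
    indirect = a * b
    total = indirect + c_prime
    return indirect / total

# Hypothetical paths: X->M (a), M->Y (b), direct X->Y (c')
pm = proportion_mediated(a=0.4, b=0.5, c_prime=0.3)  # 0.2 / 0.5 = 0.4
```

P_M is known to behave poorly when the indirect and direct effects have opposite signs (the ratio can fall outside [0, 1]), which is part of why its properties get tested empirically.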