Publication Date
  In 2025: 2
  Since 2024: 11
  Since 2021 (last 5 years): 41
  Since 2016 (last 10 years): 100
  Since 2006 (last 20 years): 233
What Works Clearinghouse Rating
  Meets WWC Standards with or without Reservations: 1
Showing 1 to 15 of 233 results
Peer reviewed
Kaitlyn G. Fitzgerald; Elizabeth Tipton – Journal of Educational and Behavioral Statistics, 2025
This article presents methods for using extant data to improve the properties of estimators of the standardized mean difference (SMD) effect size. Because samples recruited into education research studies are often more homogeneous than the populations of policy interest, the variation in educational outcomes can be smaller in these samples than…
Descriptors: Data Use, Computation, Effect Size, Meta Analysis
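As background for this entry, a minimal sketch of the plain standardized mean difference (Cohen's d with a pooled standard deviation) that such estimators start from; the article's extant-data adjustments are not reproduced here, and the function name is only illustrative.

import numpy as np

def smd_pooled(treatment, control):
    # Plain SMD: mean difference divided by the pooled standard deviation.
    # Baseline estimator only; the article's improvements are not shown.
    t = np.asarray(treatment, dtype=float)
    c = np.asarray(control, dtype=float)
    nt, nc = t.size, c.size
    sp = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1)) / (nt + nc - 2))
    return (t.mean() - c.mean()) / sp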
Peer reviewed
Ricca, Bernard P.; Blaine, Bruce E. – Journal of Experimental Education, 2022
Researchers are encouraged to report effect size statistics to quantify treatment effects or effects due to group differences. However, estimates of effect sizes, most commonly Cohen's "d," make assumptions about the distribution of data that are not always true. An alternative nonparametric estimate of effect size, relying on the median…
Descriptors: Nonparametric Statistics, Computation, Effect Size
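The specific median-based estimator is cut off in this abstract; as a generic illustration only (not necessarily the estimator studied here), a robust analogue of Cohen's d replaces means with medians and the pooled SD with a rescaled median absolute deviation.

import numpy as np

def median_mad_effect_size(treatment, control):
    # Illustrative robust analogue of Cohen's d, not the authors' estimator:
    # difference in medians scaled by a pooled, normal-consistent MAD.
    t = np.asarray(treatment, dtype=float)
    c = np.asarray(control, dtype=float)
    mad = lambda x: 1.4826 * np.median(np.abs(x - np.median(x)))
    pooled_scale = np.sqrt((mad(t) ** 2 + mad(c) ** 2) / 2.0)
    return (np.median(t) - np.median(c)) / pooled_scale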
Peer reviewed
Jingru Zhang; James E. Pustejovsky – Society for Research on Educational Effectiveness, 2024
Background/Context: In meta-analyses examining educational interventions, characterizing heterogeneity and exploring the sources of variation in synthesized effects have become increasingly prominent areas of interest. When combining results from a collection of studies, statistical dependency among their effect size estimates will arise when a…
Descriptors: Meta Analysis, Investigations, Effect Size, Computation
Peer reviewed
Kaitlyn G. Fitzgerald; Elizabeth Tipton – Grantee Submission, 2024
This article presents methods for using extant data to improve the properties of estimators of the standardized mean difference (SMD) effect size. Because samples recruited into education research studies are often more homogeneous than the populations of policy interest, the variation in educational outcomes can be smaller in these samples than…
Descriptors: Data Use, Computation, Effect Size, Meta Analysis
Peer reviewed
Bulus, Metin – Journal of Research on Educational Effectiveness, 2022
Although Cattaneo et al. (2019) provided a data-driven framework for power computations for Regression Discontinuity Designs in line with the rdrobust Stata and R commands, which allows higher-order functional forms for the score variable when using non-parametric local polynomial estimation, analogous advancements in their parametric estimation…
Descriptors: Effect Size, Computation, Regression (Statistics), Statistical Analysis
Joo, Seang-Hwane; Wang, Yan; Ferron, John; Beretvas, S. Natasha; Moeyaert, Mariola; Van Den Noortgate, Wim – Journal of Educational and Behavioral Statistics, 2022
Multiple baseline (MB) designs are becoming more prevalent in educational and behavioral research, and as they do, there is growing interest in combining effect size estimates across studies. To further refine the meta-analytic methods of estimating the effect, this study developed and compared eight alternative methods of estimating intervention…
Descriptors: Meta Analysis, Effect Size, Computation, Statistical Analysis
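For context on combining effect size estimates across studies, a minimal fixed-effect (inverse-variance weighted) average; the eight multiple-baseline estimators compared in the article are not implemented here, and the function name is illustrative.

import numpy as np

def fixed_effect_average(effect_sizes, variances):
    # Inverse-variance weighted average of per-study effect sizes,
    # with the standard error of the pooled estimate.
    d = np.asarray(effect_sizes, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    pooled = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, se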
Peer reviewed
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook; Yanli Xie; Dung Pham; Peilin Qiu; Ning Sui – Grantee Submission, 2024
Multisite trials that randomize individuals (e.g., students) within sites (e.g., schools) or clusters (e.g., teachers/classrooms) within sites (e.g., schools) are commonly used for program evaluation because they provide opportunities to learn about treatment effects as well as their heterogeneity across sites and subgroups (defined by moderating…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Educational Research, Effect Size
Peer reviewed
Ethan R. Van Norman; David A. Klingbeil; Adelle K. Sturgell – Grantee Submission, 2024
Single-case experimental designs (SCEDs) have been used with increasing frequency to identify evidence-based interventions in education. The purpose of this study was to explore how several procedural characteristics, including within-phase variability (i.e., measurement error), number of baseline observations, and number of intervention…
Descriptors: Research Design, Case Studies, Effect Size, Error of Measurement
Peer reviewed
Chen Sun; Stephanie Yang; Betsy Becker – Journal of Educational Computing Research, 2024
Computational thinking (CT), an essential 21st century skill, incorporates key computer science concepts such as abstraction, algorithms, and debugging. Debugging is particularly underrepresented in the CT training literature. This multi-level meta-analysis focused on debugging as a core CT skill, and investigated the effects of various debugging…
Descriptors: Troubleshooting, Computation, Thinking Skills, Intervention
Peer reviewed
Schauer, Jacob M.; Lee, Jihyun; Diaz, Karina; Pigott, Therese D. – Research Synthesis Methods, 2022
Missing covariates are a common issue when fitting meta-regression models. Standard practice for handling missing covariates tends to involve one of two approaches. In a complete-case analysis, effect sizes for which relevant covariates are missing are omitted from model estimation. Alternatively, researchers have employed the so-called…
Descriptors: Statistical Bias, Meta Analysis, Regression (Statistics), Research Problems
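A complete-case analysis, as described in this abstract, simply drops effect sizes whose covariates are missing before fitting the meta-regression. A minimal sketch under that reading (inverse-variance weights, one covariate; names are illustrative, and the alternative approach cut off in the abstract is not shown):

import numpy as np

def complete_case_meta_regression(effects, variances, covariate):
    # Drop effect sizes with a missing covariate, then fit a weighted
    # least-squares meta-regression with inverse-variance weights.
    d = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    x = np.asarray(covariate, dtype=float)
    keep = ~np.isnan(x)
    d, v, x = d[keep], v[keep], x[keep]
    w = np.sqrt(1.0 / v)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X * w[:, None], d * w, rcond=None)
    return beta  # intercept and covariate slope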
Larry V. Hedges; William R. Shadish; Prathiba Natesan Batley – Grantee Submission, 2022
Currently, the design standards for single-case experimental designs (SCEDs) are based on validity considerations as prescribed by the What Works Clearinghouse. However, there is also a need for design considerations such as statistical power. We derive and compute power for (AB)^k designs with multiple…
Descriptors: Statistical Analysis, Research Design, Computation, Case Studies
Peer reviewed
Luke Miratrix; Ben Weidmann – Society for Research on Educational Effectiveness, 2022
Background/Context: Attrition has been described as "the Achilles Heel of the randomized experiment" (Shadish et al., 1998 p.3). Attrition looms as a threat because it can undermine group equivalence, eroding the methodological strength at the heart of a randomized evaluation. In particular, attrition could result in unobserved…
Descriptors: Educational Research, Statistical Bias, Attrition (Research Studies), Computation
Peer reviewed
Cetin Topuz; Burcu Ulke-Kurkcuoglu – Review Journal of Autism and Developmental Disorders, 2022
The purpose of the study is to systematically review studies investigating the script-fading procedure with individuals with autism, to assess the quality of those studies, and to determine effect sizes based on the percentage of nonoverlapping data (PND) and the percentage of data exceeding the median (PEM). The…
Descriptors: Behavior Modification, Autism Spectrum Disorders, Meta Analysis, Effect Size
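PND and PEM are standard overlap-based effect sizes for single-case data; a minimal sketch of both, assuming the goal is a behavior increase (flip the comparison for a decrease). The application to script-fading studies in this review is not reproduced.

import numpy as np

def pnd(baseline, intervention, increase=True):
    # Percentage of nonoverlapping data: share of intervention points
    # beyond the most extreme baseline point.
    b = np.asarray(baseline, dtype=float)
    t = np.asarray(intervention, dtype=float)
    return 100.0 * np.mean(t > b.max() if increase else t < b.min())

def pem(baseline, intervention, increase=True):
    # Percentage of intervention points exceeding the baseline median.
    b_med = np.median(np.asarray(baseline, dtype=float))
    t = np.asarray(intervention, dtype=float)
    return 100.0 * np.mean(t > b_med if increase else t < b_med)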
Peer reviewed
Chunhua Cao; Benjamin Lugu; Jujia Li – Structural Equation Modeling: A Multidisciplinary Journal, 2024
This study examined the false positive (FP) rates and sensitivity of Bayesian fit indices to structural misspecification in Bayesian structural equation modeling. The impact of measurement quality, sample size, model size, the magnitude of the misspecified path effect, and the choice of prior on the performance of the fit indices was also…
Descriptors: Structural Equation Models, Bayesian Statistics, Measurement, Error of Measurement
Peer reviewed
Ethan R. Van Norman; Jaclin Boorse; David A. Klingbeil – Grantee Submission, 2024
Despite the increased number of quantitative effect sizes developed for single-case experimental designs (SCEDs), visual analysis remains the gold standard for evaluating methodological rigor of SCEDs and determining whether a functional relation between the treatment and the outcome exists. The physical length and range of values plotted on x and…
Descriptors: Visual Aids, Outcomes of Education, Oral Reading, Reading Comprehension