| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 3 |
| Since 2007 (last 20 years) | 4 |
| Source | Records |
| --- | --- |
| Educational Researcher | 4 |
| Author | Records |
| --- | --- |
| Cheung, Alan C. K. | 1 |
| Inglis, Matthew | 1 |
| Kraft, Matthew A. | 1 |
| Lortie-Forgues, Hugues | 1 |
| Simpson, Adrian | 1 |
| Slavin, Robert E. | 1 |
| Publication Type | Records |
| --- | --- |
| Journal Articles | 4 |
| Reports - Evaluative | 2 |
| Reports - Research | 2 |
| Information Analyses | 1 |
| Location | Records |
| --- | --- |
| United Kingdom | 1 |
| United States | 1 |
Kraft, Matthew A. – Educational Researcher, 2023
It is a healthy exercise to debate the merits of using effect-size benchmarks to interpret research findings. However, these debates obscure a more central insight that emerges from empirical distributions of effect-size estimates in the literature: Efforts to improve education often fail to move the needle. I find that 36% of effect sizes from…
Descriptors: Effect Size, Benchmarking, Educational Research, Educational Policy
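
The Kraft (2023) entry above centers on effect sizes and their empirical distribution across education studies. As a minimal illustrative sketch (not taken from the article), the standardized mean difference, Cohen's d, can be computed from treatment and control outcomes as follows; the group data here are hypothetical.

```python
import numpy as np

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d) between two groups."""
    treatment = np.asarray(treatment, dtype=float)
    control = np.asarray(control, dtype=float)
    n_t, n_c = len(treatment), len(control)
    # Pooled standard deviation from Bessel-corrected group variances.
    pooled_var = ((n_t - 1) * treatment.var(ddof=1) +
                  (n_c - 1) * control.var(ddof=1)) / (n_t + n_c - 2)
    return (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

# Hypothetical test-score data for illustration only.
rng = np.random.default_rng(0)
treat = rng.normal(loc=52.0, scale=10.0, size=200)
ctrl = rng.normal(loc=50.0, scale=10.0, size=200)
print(f"Cohen's d = {cohens_d(treat, ctrl):.2f}")  # small effect by conventional benchmarks
```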
Lortie-Forgues, Hugues; Inglis, Matthew – Educational Researcher, 2019
There are a growing number of large-scale educational randomized controlled trials (RCTs). Considering their expense, it is important to reflect on the effectiveness of this approach. We assessed the magnitude and precision of effects found in those large-scale RCTs commissioned by the UK-based Education Endowment Foundation and the U.S.-based…
Descriptors: Randomized Controlled Trials, Educational Research, Effect Size, Program Evaluation
Simpson, Adrian – Educational Researcher, 2019
A recent paper uses Bayes factors to argue a large minority of rigorous, large-scale education RCTs are "uninformative." The definition of "uninformative" depends on the authors' hypothesis choices for calculating Bayes factors. These arguably overadjust for effect size inflation and involve a fixed prior distribution,…
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
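
Simpson's critique above turns on how a Bayes factor for a trial result depends on the prior placed on the effect under the alternative hypothesis. A minimal sketch of that sensitivity, assuming a normal likelihood for the estimated effect and a zero-centered normal prior; the effect estimate, standard error, and prior scales below are hypothetical, not values from the cited papers.

```python
from scipy.stats import norm

def bf01(effect_estimate, std_error, prior_sd):
    """Bayes factor for H0 (effect = 0) vs. H1 (effect ~ Normal(0, prior_sd^2)),
    assuming the estimate is normally distributed around the true effect."""
    # Marginal likelihood under H0: Normal(0, se^2).
    like_h0 = norm.pdf(effect_estimate, loc=0.0, scale=std_error)
    # Marginal likelihood under H1: Normal(0, se^2 + prior_sd^2).
    like_h1 = norm.pdf(effect_estimate, loc=0.0,
                       scale=(std_error**2 + prior_sd**2) ** 0.5)
    return like_h0 / like_h1

# Hypothetical trial: effect-size estimate of 0.05 with standard error 0.04.
for prior_sd in (0.1, 0.3, 0.6):
    print(f"prior sd = {prior_sd:.1f}: BF01 = {bf01(0.05, 0.04, prior_sd):.2f}")
# A wider prior on the alternative makes the same small estimate favor H0 more,
# so whether a trial looks "uninformative" depends on this choice.
```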
Cheung, Alan C. K.; Slavin, Robert E. – Educational Researcher, 2016
As evidence becomes increasingly important in educational policy, it is essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. A total of 645 studies from 12 recent reviews of evaluations of preschool, reading, mathematics, and science programs were studied. Effect…
Descriptors: Effect Size, Research Methodology, Research Design, Preschool Evaluation

