Publication Date
In 2025 | 0 |
Since 2024 | 4 |
Since 2021 (last 5 years) | 26 |
Since 2016 (last 10 years) | 150 |
Since 2006 (last 20 years) | 224 |
Author
Gorard, Stephen | 8 |
See, Beng Huat | 8 |
Siddiqui, Nadia | 8 |
Demack, Sean | 7 |
Stevens, Anna | 7 |
Styles, Ben | 7 |
Maxwell, Bronwen | 6 |
Torgerson, Carole | 6 |
Burkander, Paul | 5 |
Chiang, Hanley | 5 |
Hallgren, Kristin | 5 |
Audience
Policymakers | 3 |
Practitioners | 2 |
Researchers | 2 |
Location
United Kingdom (England) | 58 |
Florida | 5 |
United Kingdom (London) | 5 |
California | 4 |
New York (New York) | 4 |
Tennessee | 4 |
United Kingdom (Manchester) | 4 |
Australia | 3 |
Illinois | 3 |
Louisiana | 3 |
Massachusetts | 3 |
Laws, Policies, & Programs
No Child Left Behind Act 2001 | 1 |
What Works Clearinghouse Rating
Meets WWC Standards without Reservations | 7 |
Meets WWC Standards with or without Reservations | 11 |
Does not meet standards | 5 |
Huey T. Chen; Liliana Morosanu; Victor H. Chen – Asia Pacific Journal of Education, 2024
The Campbellian validity typology has been used as a foundation for outcome evaluation and for developing evidence-based interventions for decades. As such, randomized controlled trials were preferred for outcome evaluation. However, some evaluators disagree with the validity typology's argument that randomized controlled trials are the best design…
Descriptors: Evaluation Methods, Systems Approach, Intervention, Evidence Based Practice
A. Brooks Bowden – AERA Open, 2023
Although experimental evaluations have been labeled the "gold standard" of evidence for policy (U.S. Department of Education, 2003), evaluations without an analysis of costs are not sufficient for policymaking (Monk, 1995; Ross et al., 2007). Funding organizations now require cost-effectiveness data in most evaluations of effects. Yet,…
Descriptors: Cost Effectiveness, Program Evaluation, Economics, Educational Finance
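Bowden's point — that effect estimates without costs are insufficient for policy — reduces, in its simplest form, to comparing programs on cost per unit of effect. A minimal sketch with invented figures (nothing below is from the article; a full analysis would also require ingredients-method costing):

```python
def cost_per_sd(cost, effect_sd):
    """Cost-effectiveness ratio: dollars per one standard deviation of gain.
    Lower is better when effects are measured on the same outcome."""
    return cost / effect_sd

# Hypothetical programs (figures invented for illustration): a large-effect,
# expensive program versus a small-effect, cheap one. The cheaper program
# wins on cost-effectiveness despite its smaller effect.
for name, effect, cost in [("Tutoring", 0.20, 1200.0), ("Software", 0.10, 150.0)]:
    print(name, round(cost_per_sd(cost, effect), 1))
```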
Julie Murray; Charlie Rioux; Sophie Parent; Jean R. Séguin; Michelle Pinsonneault; William D. Fraser; Natalie Castellanos-Ryan – Prevention Science, 2024
Parenting programs have been shown to be effective in preventing and reducing externalising problems in young children. Despite their efficacy, the low rate of initial parental engagement in these programs is a major challenge for clinicians and researchers. Few studies have examined factors associated with rates of initial engagement in parenting…
Descriptors: Parent Participation, Parent Education, Prevention, Child Behavior
Hansford, Nathaniel; Schechter, Rachel L. – International Journal of Modern Education Studies, 2023
Meta-analyses are systematic summaries of research that use quantitative methods to find the mean effect size (standardized mean difference) for interventions. Critics of meta-analysis point out that such analyses can conflate the results of low- and high-quality studies, make improper comparisons and result in statistical noise. All these…
Descriptors: Meta Analysis, Best Practices, Randomized Controlled Trials, Criticism
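The pooling step this abstract describes — a quantitative mean of standardized mean differences — is conventionally computed as an inverse-variance weighted average. A minimal fixed-effect sketch with made-up study values (not data from the article):

```python
import math

def pooled_smd(effects, variances):
    """Inverse-variance (fixed-effect) pooled standardized mean difference.

    effects:   per-study standardized mean differences (e.g., Cohen's d)
    variances: per-study sampling variances of those effects
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

# Three hypothetical studies: more precise studies (smaller variance) get more weight.
d, se = pooled_smd([0.30, 0.10, 0.50], [0.01, 0.04, 0.09])
print(round(d, 3), round(se, 3))
```

A random-effects model (which adds between-study variance to each weight) is the usual answer to the critique that pooling conflates heterogeneous studies.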
Katherine Pye; Hannah Jackson; Teresa Iacono; Alan Shiell – Journal of Autism and Developmental Disorders, 2024
Many autistic children access some form of early intervention, but little is known about the value for money of different programs. We completed a scoping review of full economic evaluations of early interventions for autistic children and/or their families. We identified nine studies and reviewed their methods and quality. Most studies involved…
Descriptors: Economics, Early Intervention, Autism Spectrum Disorders, Children
Anthony Gambino – Society for Research on Educational Effectiveness, 2021
Analysis of symmetrically predicted endogenous subgroups (ASPES) is an approach to assessing heterogeneity in an ITT effect from a randomized experiment when an intermediate variable (one that is measured after random assignment and before outcomes) is hypothesized to be related to the ITT effect, but is only measured in one group. For example,…
Descriptors: Randomized Controlled Trials, Prediction, Program Evaluation, Credibility
Juan David Parra; D. Brent Edwards Jr. – Critical Studies in Education, 2024
This paper seeks to raise awareness among educational researchers and practitioners of some significant weaknesses and internal contradictions of randomised control trials (RCTs). Although critiques throughout the years from education scholars have pointed to the detrimental effects of this experimental approach on education practice and values,…
Descriptors: Randomized Controlled Trials, Evidence Based Practice, Educational Practices, Educational Policy
Benjamin Lu; Eli Ben-Michael; Avi Feller; Luke Miratrix – Journal of Educational and Behavioral Statistics, 2023
In multisite trials, learning about treatment effect variation across sites is critical for understanding where and for whom a program works. Unadjusted comparisons, however, capture "compositional" differences in the distributions of unit-level features as well as "contextual" differences in site-level features, including…
Descriptors: Statistical Analysis, Statistical Distributions, Program Implementation, Comparative Analysis
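The compositional/contextual distinction is visible in a toy example: two sites with identical subgroup-specific treatment effects but different subgroup mixes show different unadjusted site means, and reweighting both to a common mix removes the gap. A hedged sketch (not the authors' estimator, which the paper develops formally):

```python
def site_effect(mix, effects):
    """Average treatment effect at a site: subgroup shares times subgroup effects."""
    return sum(share * eff for share, eff in zip(mix, effects))

effects = [0.10, 0.40]                       # same subgroup effects at both sites
site_a = site_effect([0.8, 0.2], effects)    # site A: mostly subgroup 1
site_b = site_effect([0.2, 0.8], effects)    # site B: mostly subgroup 2
common = site_effect([0.5, 0.5], effects)    # both reweighted to a 50/50 mix

# Unadjusted means differ (purely compositional); the reweighted values agree.
print(site_a, site_b, common)
```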
Genik, Lara M.; Aerts, Elisabeth L.; Nauman, Hiba; Barney, Chantel C.; Lewis, Stephen P.; McMurtry, C. Meghan – American Journal on Intellectual and Developmental Disabilities, 2021
Within a parallel-group randomized control trial, pain training's impact on Respite Workers' (RW) care approaches and training evaluations was explored. RW (n = 158) from 14 organizations received pain or control training following randomization. Researchers were blind until randomization; allocations were not shared explicitly with…
Descriptors: Randomized Controlled Trials, Pain, Caregiver Training, Respite Care
Troyer, Margaret – Journal of Research in Reading, 2022
Background: Randomised controlled trials (RCTs) have long been considered the gold standard in education research. Federal funds are allocated to evaluations that meet What Works Clearinghouse standards; RCT designs are required in order to meet these standards without reservations. Schools seek out interventions that are research based, in other…
Descriptors: Educational Research, Randomized Controlled Trials, Adolescents, Reading Instruction
Weiss, Michael J.; Unterman, Rebecca; Biedzio, Dorota – MDRC, 2021
Some education programs' early positive effects disappear over time. Other programs have unanticipated positive long-term effects. Foundations warn of the dangers of putting too much weight on in-program effects, which, they say, often fade after a program ends. This Issue Focus tackles the topic of post-program effects in postsecondary education…
Descriptors: Outcomes of Education, Higher Education, College Credits, Program Evaluation
Maynard, Rebecca A.; Baelen, Rebecca N.; Fein, David; Souvanna, Phomdaen – Grantee Submission, 2022
This article offers a case example of how experimental evaluation methods can be coupled with principles of design-based implementation research (DBIR), improvement science (IS), and rapid-cycle evaluation (RCE) methods to provide relatively quick, low-cost, credible assessments of strategies designed to improve programs, policies, or practices.…
Descriptors: Program Improvement, Evaluation Methods, Efficiency, Young Adults
Benjamin Lu; Eli Ben-Michael; Avi Feller; Luke Miratrix – Grantee Submission, 2022
In multisite trials, learning about treatment effect variation across sites is critical for understanding where and for whom a program works. Unadjusted comparisons, however, capture "compositional" differences in the distributions of unit-level features as well as "contextual" differences in site-level features, including…
Descriptors: Statistical Analysis, Statistical Distributions, Program Implementation, Comparative Analysis
Ross, Stephen L.; Brunner, Eric; Rosen, Rachel – Grantee Submission, 2020
This paper considers recent efforts to conduct experimental and quasi-experimental evaluations of career and technical education programs. It focuses on understanding the counterfactual, or control population, for these program evaluations, discussing how the educational experiences of the control population might vary from those of the treated…
Descriptors: Vocational Education, Program Evaluation, Educational Experience, Regression (Statistics)
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
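The sample-size pressure the abstract raises follows from the standard minimum detectable effect size (MDES) approximation for a two-arm, individually randomized trial with no covariates: MDES ≈ (z_{α/2} + z_{1−β}) · sqrt(1 / (P(1−P)N)), where P is the fraction treated and N the total sample. A textbook sketch (not the authors' specific analysis):

```python
import math

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_POWER = 0.84   # power = 0.80

def mdes(n_total, p_treated=0.5):
    """Smallest true effect (in SD units) detectable with the stated alpha/power."""
    return (Z_ALPHA + Z_POWER) * math.sqrt(1.0 / (p_treated * (1 - p_treated) * n_total))

# Halving the detectable effect quadruples the required sample:
# ~0.20 SD at N = 800 becomes ~0.10 SD only at N = 3,200.
for n in (800, 3200):
    print(n, round(mdes(n), 3))
```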