Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 3 |
Since 2016 (last 10 years) | 83 |
Since 2006 (last 20 years) | 139 |
Descriptor
Program Effectiveness | 139 |
Program Evaluation | 139 |
Randomized Controlled Trials | 139 |
Intervention | 95 |
Foreign Countries | 71 |
Statistical Analysis | 60 |
Comparative Analysis | 48 |
Elementary School Students | 42 |
Interviews | 40 |
Pretests Posttests | 37 |
Secondary School Students | 29 |
Author
Gorard, Stephen | 7 |
See, Beng Huat | 7 |
Siddiqui, Nadia | 7 |
Styles, Ben | 6 |
Torgerson, Carole | 6 |
Burkander, Paul | 5 |
Chiang, Hanley | 5 |
Hallgren, Kristin | 5 |
Herrmann, Mariesa | 5 |
Speroni, Cecilia | 5 |
Wellington, Alison | 5 |
Audience
Policymakers | 1 |
Practitioners | 1 |
Location
United Kingdom (England) | 47 |
Florida | 5 |
United Kingdom (London) | 5 |
United Kingdom (Manchester) | 4 |
Australia | 3 |
California | 3 |
Louisiana | 3 |
Massachusetts | 3 |
Pennsylvania | 3 |
Tennessee | 3 |
Arizona | 2 |
What Works Clearinghouse Rating
Meets WWC Standards without Reservations | 5 |
Meets WWC Standards with or without Reservations | 9 |
Does not meet standards | 2 |
Weiss, Michael J.; Unterman, Rebecca; Biedzio, Dorota – MDRC, 2021
Some education programs' early positive effects disappear over time. Other programs have unanticipated positive long-term effects. Foundations warn of the dangers of putting too much weight on in-program effects, which, they say, often fade after a program ends. This Issue Focus tackles the topic of post-program effects in postsecondary education.…
Descriptors: Outcomes of Education, Higher Education, College Credits, Program Evaluation
Heather C. Hill; Anna Erickson – Annenberg Institute for School Reform at Brown University, 2021
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Effectiveness, Multivariate Analysis, Randomized Controlled Trials
What Works Clearinghouse, 2022
Education decisionmakers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Thomas Archibald – Journal of Human Sciences & Extension, 2019
The debate over what counts as credible evidence often occurs on a methodological level (i.e., about what technical applications of systematic inquiry provide believable, justifiable claims about a program). Less often, it occurs on an epistemological level (i.e., about what ways of knowing are appropriate for making claims about a program). Even…
Descriptors: Extension Education, Credibility, Evidence, Epistemology
Goodman, Lisa A.; Epstein, Deborah; Sullivan, Cris M. – American Journal of Evaluation, 2018
Programs for domestic violence (DV) victims and their families have grown exponentially over the last four decades. The evidence demonstrating the extent of their effectiveness, however, often has been criticized as stemming from studies lacking scientific rigor. A core reason for this critique is the widespread belief that credible evidence can…
Descriptors: Randomized Controlled Trials, Program Evaluation, Program Effectiveness, Family Violence
Heather C. Hill; Anna Erickson – Educational Researcher, 2019
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Implementation, Program Effectiveness, Intervention
Wong, Vivian C.; Steiner, Peter M.; Anglin, Kylie L. – Grantee Submission, 2018
Given the widespread use of non-experimental (NE) methods for assessing program impacts, there is a strong need to know whether NE approaches yield causally valid results in field settings. In within-study comparison (WSC) designs, the researcher compares treatment effects from an NE with those obtained from a randomized experiment that shares the…
Descriptors: Evaluation Methods, Program Evaluation, Program Effectiveness, Comparative Analysis
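The within-study comparison (WSC) logic described in the Wong, Steiner, and Anglin abstract can be sketched in a few lines: a nonexperimental (NE) impact estimate is benchmarked against the estimate from a randomized experiment that shares the same treatment group, and the gap between the two is read as the bias of the NE design. This is a minimal illustration with invented numbers, not data from the study.

```python
# Within-study comparison sketch: the treatment group is shared, and the
# NE comparison group stands in for the randomized control group.
# All outcome values below are illustrative.

def mean(xs):
    return sum(xs) / len(xs)

# Shared treatment-group outcomes (e.g., test scores)
treated = [72.0, 75.0, 78.0, 74.0]

# Randomized control group (the experimental benchmark counterfactual)
randomized_control = [70.0, 71.0, 69.0, 70.0]

# Nonexperimental comparison group (e.g., matched students from another site)
ne_comparison = [66.0, 68.0, 67.0, 67.0]

rct_impact = mean(treated) - mean(randomized_control)  # benchmark estimate
ne_impact = mean(treated) - mean(ne_comparison)        # NE estimate
wsc_bias = ne_impact - rct_impact                      # bias attributed to the NE design

print(rct_impact, ne_impact, wsc_bias)  # → 4.75 7.75 3.0
```

In practice WSC analyses adjust the NE estimate with matching or regression before benchmarking; the subtraction at the end is the same.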
Hallberg, Kelly; Williams, Ryan; Swanlund, Andrew – Journal of Research on Educational Effectiveness, 2020
More aggregate data on school performance is available than ever before, opening up new possibilities for applied researchers interested in assessing the effectiveness of school-level interventions quickly and at a relatively low cost by implementing comparative interrupted times series (CITS) designs. We examine the extent to which effect…
Descriptors: Data Use, Research Methodology, Program Effectiveness, Design
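The comparative interrupted time series (CITS) design that Hallberg, Williams, and Swanlund examine can be illustrated with a small sketch: fit each group's baseline trend, project it into the post-program years, and take the treated schools' deviation from their own trend minus the comparison schools' deviation from theirs. The yearly score values below are invented for illustration.

```python
# CITS sketch: program starts after year 3; scores are school-level means.
# Data are hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares slope and intercept for a simple time trend."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

years_pre, years_post = [1, 2, 3], [4, 5]

treated_pre, treated_post = [50.0, 52.0, 54.0], [60.0, 62.0]
control_pre, control_post = [48.0, 50.0, 52.0], [54.0, 56.0]

def deviation(pre_y, post_y):
    """Mean post-period departure from the group's own baseline trend."""
    slope, intercept = fit_line(years_pre, pre_y)
    projected = [slope * t + intercept for t in years_post]
    return sum(p - proj for p, proj in zip(post_y, projected)) / len(post_y)

# CITS impact: treated deviation minus comparison deviation
cits_impact = deviation(treated_pre, treated_post) - deviation(control_pre, control_post)
print(cits_impact)  # → 4.0
```

The comparison-group deviation is what distinguishes CITS from a simple interrupted time series: it nets out shocks (a new test, a policy change) that hit all schools in the post period.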
May, Henry; Jones, Akisha; Blakeney, Aly – AERA Online Paper Repository, 2019
Using an RD design provides statistically robust estimates while allowing researchers a different causal estimation tool to be used in educational environments where an RCT may not be feasible. Results from External Evaluation of the i3 Scale-Up of Reading Recovery show that impact estimates were remarkably similar between a randomized control…
Descriptors: Regression (Statistics), Research Design, Randomized Controlled Trials, Research Methodology
Yoon, HyeonJin – ProQuest LLC, 2018
In basic regression discontinuity (RD) designs, causal inference is limited to the local area near a single cutoff. To strengthen the generality of the RD treatment estimate, a design with multiple cutoffs along the assignment variable continuum can be applied. The availability of multiple cutoffs allows estimation of a pooled average treatment…
Descriptors: Regression (Statistics), Program Evaluation, Computation, Statistical Analysis
Yoon, HyeonJin – Grantee Submission, 2018
In basic regression discontinuity (RD) designs, causal inference is limited to the local area near a single cutoff. To strengthen the generality of the RD treatment estimate, a design with multiple cutoffs along the assignment variable continuum can be applied. The availability of multiple cutoffs allows estimation of a pooled average treatment…
Descriptors: Regression (Statistics), Program Evaluation, Computation, Statistical Analysis
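The multi-cutoff regression discontinuity (RD) design in the Yoon abstracts can be sketched as follows: estimate a local effect at each site's cutoff, then pool the cutoff-specific estimates into a single average treatment effect. The sketch below uses a crude local mean difference rather than the local-polynomial estimators used in real RD work, and all scores, outcomes, and cutoffs are invented.

```python
# Multi-cutoff RD sketch: each site applies a different cutoff on the same
# assignment variable; cutoff-specific estimates are pooled by sample size.
# Numbers are hypothetical.

def local_rd_estimate(scores, outcomes, cutoff, bandwidth=5.0):
    """Difference in mean outcomes just above vs just below the cutoff."""
    above = [y for s, y in zip(scores, outcomes) if cutoff <= s < cutoff + bandwidth]
    below = [y for s, y in zip(scores, outcomes) if cutoff - bandwidth <= s < cutoff]
    return sum(above) / len(above) - sum(below) / len(below), len(above) + len(below)

# Two sites: (assignment scores, outcomes, cutoff)
sites = [
    ([38, 39, 41, 42], [10.0, 11.0, 15.0, 16.0], 40),
    ([58, 59, 61, 62], [20.0, 21.0, 27.0, 28.0], 60),
]

estimates = [local_rd_estimate(s, y, c) for s, y, c in sites]

# Pooled average treatment effect, weighting each cutoff by its local sample
total_n = sum(n for _, n in estimates)
pooled_ate = sum(est * n for est, n in estimates) / total_n
print(pooled_ate)  # → 6.0
```

Pooling across cutoffs is what extends the inference beyond the neighborhood of a single cutoff, which is the generalization point the abstract raises.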
Koutsouris, George; Norwich, Brahm – British Educational Research Journal, 2018
This article addresses issues related to whether null randomised control trial (RCT) findings can by themselves be a secure indicator of programme failure. This is done by drawing on the findings of the evaluation of the Integrated Group Reading (IGR) programme using a number of teacher case studies. The case studies illustrate how the same…
Descriptors: Randomized Controlled Trials, National Curriculum, Educational Research, Reading Programs
Ponguta, Liliana Angelica; Issa, Ghassan; Aoudeh, Lara; Maalouf, Cosette; Nourallah, Sawsan; Khoshnood, Kaveh; Zonderman, Anna Leslie; Katsovich, Liliya; Moore, Christina; Salah, Rima; Al-Soleiti, Majd; Britto, Pia Rebello; Leckman, James Frederick – New Directions for Child and Adolescent Development, 2019
Despite the knowledge that quality early childhood development programs, including those that target parental knowledge and behaviors, are essential for ameliorating the negative effects of early-life adversity, robust analyses of their implementation and impact in highly vulnerable settings are scarce. To address this knowledge gap, we conducted…
Descriptors: Program Implementation, Foreign Countries, Program Evaluation, Parent Child Relationship
Outhwaite, Laura A.; Gulliford, Anthea; Pitchford, Nicola J. – International Journal of Research & Method in Education, 2020
Randomized control trials (RCTs) are commonly regarded as the 'gold standard' for evaluating educational interventions. While this experimental design is valuable in establishing causal relationships between the tested intervention and outcomes, reliance on statistical aggregation typically underplays the situated context in which interventions…
Descriptors: Randomized Controlled Trials, Intervention, Program Evaluation, Program Effectiveness
Wright, Julie A.; Whiteley, Jessica A.; Watson, Bonnie L.; Sheinfeld Gorin, Sherri N.; Hayman, Laura L. – Health Education Research, 2018
Recommendations for the prevention of childhood obesity encourage providers to counsel parents and their children on healthy diet and activity behaviors. This study evaluated the feasibility of a theory-based, tailored communication intervention for obesity prevention ("Team Up for Health") delivered during a well-child visit. A…
Descriptors: Prevention, Child Health, Obesity, Intervention