Publication Date
In 2025: 0
Since 2024: 4
Since 2021 (last 5 years): 26
Since 2016 (last 10 years): 150
Since 2006 (last 20 years): 224
Author
Gorard, Stephen: 8
See, Beng Huat: 8
Siddiqui, Nadia: 8
Demack, Sean: 7
Stevens, Anna: 7
Styles, Ben: 7
Maxwell, Bronwen: 6
Torgerson, Carole: 6
Burkander, Paul: 5
Chiang, Hanley: 5
Hallgren, Kristin: 5
Audience
Policymakers: 3
Practitioners: 2
Researchers: 2
Location
United Kingdom (England): 58
Florida: 5
United Kingdom (London): 5
California: 4
New York (New York): 4
Tennessee: 4
United Kingdom (Manchester): 4
Australia: 3
Illinois: 3
Louisiana: 3
Massachusetts: 3
Laws, Policies, & Programs
No Child Left Behind Act 2001: 1
What Works Clearinghouse Rating
Meets WWC Standards without Reservations: 7
Meets WWC Standards with or without Reservations: 11
Does Not Meet WWC Standards: 5
Fein, David; Maynard, Rebecca A. – Grantee Submission, 2022
In 2015, Abt Associates received a grant from the Institute of Education Sciences (IES) for a five-year "Development and Innovation" study of PTC (Year Up's Professional Training Corps). The purposes of the study were to gauge progress in implementing PTC and to develop and test improvements where needed. Fein et al. (2020) summarize the IES study's approach and findings. A…
Descriptors: Program Evaluation, Program Implementation, Program Improvement, College Students
K. L. Anglin; A. Krishnamachari; V. Wong – Grantee Submission, 2020
This article reviews important statistical methods for estimating the impact of interventions on outcomes in education settings, particularly programs that are implemented in field, rather than laboratory, settings. We begin by describing the causal inference challenge for evaluating program effects. Then four research designs are discussed that…
Descriptors: Causal Models, Statistical Inference, Intervention, Program Evaluation
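The benchmark design such reviews build on is the randomized experiment, where the difference in group means is an unbiased estimate of the average effect. A minimal sketch with simulated data (the numbers and setup are illustrative, not taken from the article):

```python
import random
from statistics import NormalDist, mean, stdev

# Simulate a two-arm randomized trial: treatment shifts the outcome by a
# known amount, so we can check the estimator against the truth.
random.seed(42)

true_effect = 0.3
control = [random.gauss(0.0, 1.0) for _ in range(500)]
treated = [random.gauss(true_effect, 1.0) for _ in range(500)]

# Difference in means and its standard error (unequal-variance formula).
effect = mean(treated) - mean(control)
se = (stdev(treated) ** 2 / len(treated) + stdev(control) ** 2 / len(control)) ** 0.5

# 95% confidence interval using the normal approximation.
z = NormalDist().inv_cdf(0.975)
ci = (effect - z * se, effect + z * se)
print(f"estimated effect: {effect:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

The quasi-experimental designs the article surveys replace the randomization step with other sources of identification, but the estimand (an average causal effect) stays the same.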
Heather C. Hill; Anna Erickson – Annenberg Institute for School Reform at Brown University, 2021
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Effectiveness, Multivariate Analysis, Randomized Controlled Trials
What Works Clearinghouse, 2022
Education decision makers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time-consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Huang, Francis L. – Practical Assessment, Research & Evaluation, 2018
Among econometricians, instrumental variable (IV) estimation is a commonly used technique to estimate the causal effect of a particular variable on a specified outcome. However, among applied researchers in the social sciences, IV estimation may not be well understood. Although there are several IV estimation primers from different fields, most…
Descriptors: Computation, Statistical Analysis, Compliance (Psychology), Randomized Controlled Trials
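The core IV idea can be shown with a simulated encouragement design (entirely hypothetical data, not from the article): a randomized offer Z serves as the instrument for actual take-up D, which is confounded with the outcome Y through an unobserved variable. The Wald estimator, the simplest IV estimator, is the intent-to-treat effect divided by the first-stage effect on take-up:

```python
import random
from statistics import mean

# Z: randomized offer (the instrument); U: unobserved motivation that drives
# both take-up D and outcome Y, so the naive D-group comparison is biased.
random.seed(0)
true_effect = 2.0
n = 20000

Z = [random.randint(0, 1) for _ in range(n)]
U = [random.gauss(0, 1) for _ in range(n)]
D = [1 if (0.5 * z + 0.8 * u + random.gauss(0, 1)) > 0 else 0
     for z, u in zip(Z, U)]
Y = [true_effect * d + 1.5 * u + random.gauss(0, 1)
     for d, u in zip(D, U)]

def group_mean(values, flags, level):
    return mean(v for v, f in zip(values, flags) if f == level)

itt_y = group_mean(Y, Z, 1) - group_mean(Y, Z, 0)   # intent-to-treat effect
itt_d = group_mean(D, Z, 1) - group_mean(D, Z, 0)   # first stage: effect on take-up
wald = itt_y / itt_d                                # IV (Wald) estimate

naive = group_mean(Y, D, 1) - group_mean(Y, D, 0)   # confounded comparison
print(f"naive: {naive:.2f}, IV (Wald): {wald:.2f}, truth: {true_effect}")
```

Because Z is randomized, the confounder cancels out of both the numerator and the denominator; the naive comparison, by contrast, absorbs the confounding and overstates the effect.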
Shrubsole, Kirstine; Rogers, Kris; Power, Emma – International Journal of Language & Communication Disorders, 2022
Background: While implementation studies in aphasia management have shown promising improvements to clinical practice, it is currently unknown if aphasia implementation outcomes are sustained and what factors may influence clinical sustainability. Aims: To evaluate the sustainment (i.e., sustained improvement of aphasia management practices and…
Descriptors: Speech Language Pathology, Allied Health Personnel, Aphasia, Program Implementation
Thomas Archibald – Journal of Human Sciences & Extension, 2019
The debate over what counts as credible evidence often occurs on a methodological level (i.e., about what technical applications of systematic inquiry provide believable, justifiable claims about a program). Less often, it occurs on an epistemological level (i.e., about what ways of knowing are appropriate for making claims about a program). Even…
Descriptors: Extension Education, Credibility, Evidence, Epistemology
Lortie-Forgues, Hugues; Inglis, Matthew – Educational Researcher, 2019
In this response, we first show that Simpson's proposed analysis answers a different and less interesting question than ours. We then justify the choice of prior for our Bayes factor calculations, but we also demonstrate that the substantive conclusions of our article are not substantially affected by varying this choice.
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
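The robustness check described above can be sketched with the standard normal-normal Bayes factor for a trial's summary statistics (the numbers below are made up for illustration): under H0 the true effect is zero, under H1 it is drawn from N(0, prior_sd²), and each marginal likelihood of the observed estimate is a normal density:

```python
from math import exp, pi, sqrt

def normal_pdf(x, sd):
    # Density of N(0, sd^2) at x.
    return exp(-0.5 * (x / sd) ** 2) / (sd * sqrt(2 * pi))

def bf01(estimate, se, prior_sd):
    """Bayes factor favoring H0 (effect = 0) over H1 (effect ~ N(0, prior_sd^2)),
    given an observed effect estimate and its standard error."""
    return normal_pdf(estimate, se) / normal_pdf(estimate, sqrt(se**2 + prior_sd**2))

# A small, precisely estimated effect favors H0 across a range of prior widths,
# i.e., the conclusion is not an artifact of one particular prior choice.
for prior_sd in (0.2, 0.4, 0.8):
    print(f"prior sd {prior_sd}: BF01 = {bf01(0.02, 0.05, prior_sd):.1f}")
```

Varying `prior_sd` over a plausible range and checking whether BF01 stays on the same side of 1 is exactly the kind of sensitivity analysis the response describes.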
Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials, defined as multiple sites or multiple arms in a single evaluation, as well as replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to increase statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
Goodman, Lisa A.; Epstein, Deborah; Sullivan, Cris M. – American Journal of Evaluation, 2018
Programs for domestic violence (DV) victims and their families have grown exponentially over the last four decades. The evidence demonstrating the extent of their effectiveness, however, often has been criticized as stemming from studies lacking scientific rigor. A core reason for this critique is the widespread belief that credible evidence can…
Descriptors: Randomized Controlled Trials, Program Evaluation, Program Effectiveness, Family Violence
Lortie-Forgues, Hugues; Inglis, Matthew – Educational Researcher, 2019
There are a growing number of large-scale educational randomized controlled trials (RCTs). Considering their expense, it is important to reflect on the effectiveness of this approach. We assessed the magnitude and precision of effects found in those large-scale RCTs commissioned by the UK-based Education Endowment Foundation and the U.S.-based…
Descriptors: Randomized Controlled Trials, Educational Research, Effect Size, Program Evaluation
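The precision question raised here can be made concrete with a back-of-envelope minimum detectable effect size (MDES) calculation for a two-arm trial with a standardized outcome, equal allocation, and individual randomization (a simplification; clustered designs need larger samples than this sketch suggests):

```python
from statistics import NormalDist

def mdes(n_total, alpha=0.05, power=0.80):
    """Smallest standardized effect a two-arm trial can detect with the
    given significance level and power, 50/50 allocation."""
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)  # ~2.80 for defaults
    se = 2 / n_total ** 0.5   # SE of a standardized mean difference, equal arms
    return multiplier * se

for n in (200, 1000, 5000):
    print(f"n = {n:>5}: MDES = {mdes(n):.3f} SD")
```

Even a 1,000-participant trial is powered only for effects of roughly 0.18 SD, which helps explain why many large-scale RCTs report small estimates with wide uncertainty bands.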
Larry L. Orr; Robert B. Olsen; Stephen H. Bell; Ian Schmid; Azim Shivji; Elizabeth A. Stuart – Journal of Policy Analysis and Management, 2019
Evidence-based policy at the local level requires predicting the impact of an intervention to inform whether it should be adopted. Increasingly, local policymakers have access to published research evaluating the effectiveness of policy interventions from national research clearinghouses that review and disseminate evidence from program…
Descriptors: Educational Policy, Evidence Based Practice, Intervention, Decision Making
Britt, Jessica; Fein, David; Maynard, Rebecca; Warfield, Garrett – Grantee Submission, 2021
This case study describes a small randomized controlled trial (RCT) comparing alternative strategies for monitoring and supporting academic achievement in Year Up's Professional Training Corps (PTC) program. Year Up is a nonprofit organization dedicated to preparing economically disadvantaged young adults for well-paying jobs with advancement…
Descriptors: Randomized Controlled Trials, Program Evaluation, Academic Achievement, Career Development
Heather C. Hill; Anna Erickson – Educational Researcher, 2019
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Implementation, Program Effectiveness, Intervention
Norwich, Brahm; Koutsouris, George – International Journal of Research & Method in Education, 2020
This paper describes the context, processes and issues experienced over 5 years in which an RCT was carried out to evaluate a programme for children aged 7-8 who were struggling with their reading. Its specific aim is to illuminate questions about the design of complex teaching approaches and their evaluation using an RCT. This covers the early…
Descriptors: Randomized Controlled Trials, Program Evaluation, Reading Programs, Educational Research