Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 10
Since 2016 (last 10 years): 37
Since 2006 (last 20 years): 41
Descriptor
Educational Research: 41
Program Effectiveness: 41
Randomized Controlled Trials: 41
Intervention: 23
Program Evaluation: 12
Research Methodology: 12
Quasiexperimental Design: 8
Research Design: 8
Computation: 7
Effect Size: 6
Elementary Secondary Education: 6
Author
Schochet, Peter Z.: 4
May, Henry: 3
Connolly, Paul: 2
Erickson, Anna: 2
Higgins, Steve: 2
Hill, Heather C.: 2
Kasim, Adetayo: 2
Kelcey, Ben: 2
Singh, Akansha: 2
Uwimpuhwe, Germaine: 2
Ahn, Jee Bin: 1
Audience
Researchers: 2
Policymakers: 1
Practitioners: 1
Location
United Kingdom (England): 4
Illinois: 1
Indiana: 1
Liberia: 1
Massachusetts (Boston): 1
North Carolina: 1
Oregon: 1
Pennsylvania (Philadelphia): 1
Assessments and Surveys
Dynamic Indicators of Basic…: 1
Indiana Statewide Testing for…: 1
TerraNova Multiple Assessments: 1
What Works Clearinghouse Rating
Meets WWC Standards without Reservations: 2
Meets WWC Standards with or without Reservations: 2
Edmunds, Julie A.; Gicheva, Dora; Thrift, Beth; Hull, Marie – Journal of Mixed Methods Research, 2022
Randomized controlled trials (RCTs) in education are common as the design allows for an unbiased estimate of the overall impact of a program. As more RCTs are completed, researchers are also noting that an overall average impact may mask substantial variation across sites or groups of individuals. Mixed methods can provide insight and help in…
Descriptors: Randomized Controlled Trials, Mixed Methods Research, Educational Research, Online Courses
Brown, Seth; Song, Mengli; Cook, Thomas D.; Garet, Michael S. – American Educational Research Journal, 2023
This study examined bias reduction in the eight nonequivalent comparison group designs (NECGDs) that result from combining (a) choice of a local versus non-local comparison group, and analytic use or not of (b) a pretest measure of the study outcome and (c) a rich set of other covariates. Bias was estimated as the difference in causal estimate…
Descriptors: Research Design, Pretests Posttests, Computation, Bias
Anamarie A. Whitaker; Margaret Burchinal; Jade M. Jenkins; Drew H. Bailey; Tyler W. Watts; Greg J. Duncan; Emma R. Hart; Ellen Peisner-Feinberg – Annenberg Institute for School Reform at Brown University, 2024
High-quality preschool programs are heralded as an effective policy tool to promote the development and life-long wellbeing of children from low-income families. Yet evaluations of recent preschool programs produce puzzling findings, including negative impacts, and divergent, weaker results than were shown in demonstration programs implemented in…
Descriptors: Preschool Education, Program Effectiveness, Educational Quality, Educational Research
Uwimpuhwe, Germaine; Singh, Akansha; Higgins, Steve; Kasim, Adetayo – International Journal of Research & Method in Education, 2021
Educational researchers advocate the use of an effect size and its confidence interval to assess the effectiveness of interventions instead of relying on a p-value, which has been blamed for lack of reproducibility of research findings and the misuse of statistics. The aim of this study is to provide a framework, which can provide direct evidence…
Descriptors: Educational Research, Randomized Controlled Trials, Bayesian Statistics, Effect Size
Sam Sims; Jake Anders; Matthew Inglis; Hugues Lortie-Forgues; Ben Styles; Ben Weidmann – Annenberg Institute for School Reform at Brown University, 2023
Over the last twenty years, education researchers have increasingly conducted randomised experiments with the goal of informing the decisions of educators and policymakers. Such experiments have generally employed broad, consequential, standardised outcome measures in the hope that this would allow decisionmakers to compare effectiveness of…
Descriptors: Educational Research, Research Methodology, Randomized Controlled Trials, Program Effectiveness
What Works Clearinghouse, 2021
The What Works Clearinghouse (WWC) identifies existing research on educational interventions, assesses the quality of the research, and summarizes and disseminates the evidence from studies that meet WWC standards. The WWC aims to provide enough information so educators can use the research to make informed decisions in their settings. This…
Descriptors: Program Effectiveness, Intervention, Educational Research, Educational Quality
Heather C. Hill; Anna Erickson – Annenberg Institute for School Reform at Brown University, 2021
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Effectiveness, Multivariate Analysis, Randomized Controlled Trials
Jacob, Robin T.; Doolittle, Fred; Kemple, James; Somers, Marie-Andrée – Educational Researcher, 2019
A substantial number of randomized trials of educational interventions that have been conducted over the past two decades have produced null results, with either no impact or an unreliable estimate of impact on student achievement or other outcomes of interest. The investment of time and money spent implementing such trials warrants more useful…
Descriptors: Intervention, Randomized Controlled Trials, Educational Research, Program Effectiveness
What Works Clearinghouse, 2022
Education decisionmakers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Spybrook, Jessaca; Zhang, Qi; Kelcey, Ben; Dong, Nianbo – Educational Evaluation and Policy Analysis, 2020
Over the past 15 years, we have seen an increase in the use of cluster randomized trials (CRTs) to test the efficacy of educational interventions. These studies are often designed with the goal of determining whether a program works, or answering the what works question. Recently, the goals of these studies expanded to include for whom and under…
Descriptors: Randomized Controlled Trials, Educational Research, Program Effectiveness, Intervention
Timothy Lycurgus; Ben B. Hansen – Society for Research on Educational Effectiveness, 2022
Background: Efficacy trials in education often possess a motivating theory of change: how and why should the desired improvement in outcomes occur as a consequence of the intervention? In scenarios with repeated measurements, certain subgroups may be more or less likely to manifest a treatment effect; the theory of change (TOC) provides guidance…
Descriptors: Educational Change, Educational Research, Intervention, Efficiency
Zhao, Yong – Journal of Educational Change, 2017
Medical research is held as a field for education to emulate. Education researchers have been urged to adopt randomized controlled trials, a more "scientific" research method believed to have resulted in the advances in medicine. But a much more important lesson education needs to borrow from medicine has been ignored. That is the study…
Descriptors: Educational Research, Medical Research, Randomized Controlled Trials, Program Effectiveness
Cameron, Ewan – Journal of Education Policy, 2020
In the 2016-2017 school year, the Liberian government launched Partnership Schools For Liberia (PSL), a pilot program in which the management of 93 primary schools was transferred to 8 private contractors. The pilot owed much to the importation of western policy models and was facilitated by the British organisation ARK and involved BIA, a private…
Descriptors: Foreign Countries, Partnerships in Education, Privatization, Democracy
Heather C. Hill; Anna Erickson – Educational Researcher, 2019
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Implementation, Program Effectiveness, Intervention
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2018
Design-based methods have recently been developed as a way to analyze randomized controlled trial (RCT) data for designs with a single treatment and control group. This article builds on this framework to develop design-based estimators for evaluations with multiple research groups. Results are provided for a wide range of designs used in…
Descriptors: Randomized Controlled Trials, Computation, Educational Research, Experimental Groups