William Herbert Yeaton – International Journal of Research & Method in Education, 2024
Though previously unacknowledged, a SMART (Sequential Multiple Assignment Randomized Trial) design combines regression discontinuity (RD) and randomized controlled trial (RCT) designs. This combined structure creates a conceptual symbiosis between the two designs that enables both RCT-based and previously unrecognized RD-based inferential claims.…
Descriptors: Research Design, Randomized Controlled Trials, Regression (Statistics), Inferences
Debbie L. Hahs-Vaughn; Christine Depies DeStefano; Christopher D. Charles; Mary Little – American Journal of Evaluation, 2025
Randomized experiments are a strong design for establishing impact evidence because the random assignment mechanism theoretically allows confidence in attributing group differences to the intervention. Growth of randomized experiments within educational studies has been widely documented. However, randomized experiments within education have…
Descriptors: Educational Research, Randomized Controlled Trials, Research Problems, Educational Policy
Peter Z. Schochet – Journal of Educational and Behavioral Statistics, 2025
Random encouragement designs evaluate treatments that aim to increase participation in a program or activity. These randomized controlled trials (RCTs) can also assess the mediated effects of participation itself on longer term outcomes using a complier average causal effect (CACE) estimation framework. This article considers power analysis…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
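The CACE estimation framework mentioned in this abstract can be illustrated with a minimal simulation of an encouragement design. This sketch is not from the article; the population shares, effect size, and variable names are invented for illustration. It uses the standard Wald/IV ratio: the intention-to-treat effect on the outcome divided by the intention-to-treat effect on take-up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Encouragement design: z = randomized encouragement, d = actual participation.
z = rng.integers(0, 2, n)
u = rng.random(n)
always = u < 0.2                      # 20% participate regardless of encouragement
complier = (u >= 0.2) & (u < 0.6)     # 40% participate only if encouraged
d = np.where(always, 1, np.where(complier, z, 0))

# Outcome: participation raises the outcome by 2.0 (the true causal effect).
y = 10 + 2.0 * d + rng.normal(0, 1, n)

# Wald / IV estimator of the complier average causal effect (CACE):
# ITT effect on the outcome divided by ITT effect on take-up.
itt_y = y[z == 1].mean() - y[z == 0].mean()
itt_d = d[z == 1].mean() - d[z == 0].mean()
cace = itt_y / itt_d
print(round(cace, 2))  # close to the true participation effect of 2.0
```

The denominator is the compliance rate induced by encouragement (here about 0.4), which is why power in these designs degrades quickly as take-up differences shrink.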
Timothy Lycurgus; Daniel Almirall – Society for Research on Educational Effectiveness, 2024
Background: Education scientists are increasingly interested in constructing interventions that are adaptive over time to suit the evolving needs of students, classrooms, or schools. Such "adaptive interventions" (also referred to as dynamic treatment regimens or dynamic instructional regimes) determine which treatment should be offered…
Descriptors: Educational Research, Research Design, Randomized Controlled Trials, Intervention
Peter Schochet – Society for Research on Educational Effectiveness, 2024
Random encouragement designs are randomized controlled trials (RCTs) that test interventions aimed at increasing participation in a program or activity whose take-up is not universal. In these RCTs, instead of randomizing individuals or clusters directly into treatment and control groups to participate in a program or activity, the randomization…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
Sims, Sam; Anders, Jake; Inglis, Matthew; Lortie-Forgues, Hugues – Journal of Research on Educational Effectiveness, 2023
Randomized controlled trials have proliferated in education, in part because they provide an unbiased estimator for the causal impact of interventions. It is increasingly recognized that many such trials in education have low power to detect an effect if indeed there is one. However, it is less well known that low-powered trials tend to…
Descriptors: Randomized Controlled Trials, Educational Research, Effect Size, Intervention
Sandra Jo Wilson; Brian Freeman; E. C. Hedberg – Grantee Submission, 2024
As reporting of effect sizes in evaluation studies has proliferated, researchers and consumers of research need tools for interpreting or benchmarking the magnitude of those effect sizes that are relevant to the intervention, target population, and outcome measure being considered. Similarly, researchers planning education studies with social and…
Descriptors: Benchmarking, Effect Size, Meta Analysis, Statistical Analysis
Brown, Seth; Song, Mengli; Cook, Thomas D.; Garet, Michael S. – American Educational Research Journal, 2023
This study examined bias reduction in the eight nonequivalent comparison group designs (NECGDs) that result from combining (a) choice of a local versus non-local comparison group, and analytic use or not of (b) a pretest measure of the study outcome and (c) a rich set of other covariates. Bias was estimated as the difference in causal estimate…
Descriptors: Research Design, Pretests Posttests, Computation, Bias
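The role of a pretest covariate in reducing bias in nonequivalent comparison group designs can be sketched with a small simulation. This is an illustration under invented assumptions (selection on a single "ability" confounder, a noisy pretest proxy), not the design or data of the study above.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Non-equivalent groups: higher-ability students are likelier to get the program.
ability = rng.normal(0, 1, n)
treated = rng.random(n) < 1 / (1 + np.exp(-ability))  # selection on ability
pretest = ability + rng.normal(0, 0.5, n)             # pretest proxies ability
true_effect = 0.2
posttest = ability + true_effect * treated + rng.normal(0, 0.5, n)

# Naive group comparison is biased upward by the ability difference.
naive = posttest[treated].mean() - posttest[~treated].mean()

# OLS adjusting for the pretest removes most (not all) of the selection bias,
# since the pretest is only a noisy measure of the confounder.
X = np.column_stack([np.ones(n), treated.astype(float), pretest])
beta = np.linalg.lstsq(X, posttest, rcond=None)[0]
adjusted = beta[1]

print(round(naive, 2), round(adjusted, 2))
```

Residual bias remains because the pretest measures the confounder with error, which is one reason studies like the one above also examine a rich set of additional covariates.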
Michael J. Weiss; Howard S. Bloom; Kriti Singh – Educational Evaluation and Policy Analysis, 2023
This article provides evidence about predictive relationships between features of community college interventions and their impacts on student progress. This evidence is based on analyses of student-level data from large-scale randomized trials of 39 (mostly) community college interventions. Specifically, the evidence consistently indicates that…
Descriptors: Community College Students, Intervention, Predictive Measurement, Randomized Controlled Trials
Simpson, Adrian – Journal of Research on Educational Effectiveness, 2023
Evidence-based education aims to support policy makers choosing between potential interventions. This rarely involves considering each in isolation; instead, sets of evidence regarding many potential policy interventions are considered. Filtering a set on any quantity measured with error risks the "winner's curse": conditional on…
Descriptors: Effect Size, Educational Research, Evidence Based Practice, Foreign Countries
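The winner's curse named in this abstract is easy to reproduce in simulation: filter a set of noisily measured effects on their estimated size, and the selected estimates overstate the truth even though each estimate is individually unbiased. The numbers below are invented for illustration, not drawn from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
k, true_effect, se = 1000, 0.10, 0.05

# k candidate interventions, each with the same true effect, measured with noise.
estimates = rng.normal(true_effect, se, k)

# Policy filter: keep only interventions whose estimate clears 2 standard errors
# (a conventional "statistically significant" cut-off).
selected = estimates[estimates > 2 * se]

print(round(estimates.mean(), 3))  # unconditional estimates: unbiased
print(round(selected.mean(), 3))   # conditional on selection: inflated
```

The unconditional mean recovers the true 0.10, while the mean among "winners" is noticeably larger, which is exactly the conditional bias the abstract warns about when evidence sets are filtered before decisions.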
Uwimpuhwe, Germaine; Singh, Akansha; Higgins, Steve; Kasim, Adetayo – International Journal of Research & Method in Education, 2021
Educational researchers advocate the use of an effect size and its confidence interval to assess the effectiveness of interventions instead of relying on a p-value, which has been blamed for lack of reproducibility of research findings and the misuse of statistics. This study aims to provide a framework that can give direct evidence…
Descriptors: Educational Research, Randomized Controlled Trials, Bayesian Statistics, Effect Size
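The contrast this abstract draws, reporting an effect size with an interval rather than a p-value, can be sketched as follows. This is a generic illustration with invented data, not the authors' framework: it computes Cohen's d with an approximate 95% confidence interval, then (under a flat prior, so the posterior for the effect is roughly normal around d) the direct probability that the intervention is effective.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
treat = rng.normal(0.3, 1.0, 200)  # treated outcomes, true effect 0.3 SD
ctrl = rng.normal(0.0, 1.0, 200)   # control outcomes

# Cohen's d: standardized mean difference using the pooled SD.
n1, n2 = len(treat), len(ctrl)
sp = sqrt(((n1 - 1) * treat.var(ddof=1) + (n2 - 1) * ctrl.var(ddof=1))
          / (n1 + n2 - 2))
d = (treat.mean() - ctrl.mean()) / sp

# Approximate standard error of d and a 95% confidence interval.
se = sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
ci = (d - 1.96 * se, d + 1.96 * se)

# With a flat prior the posterior is roughly N(d, se^2), so the probability
# that the effect exceeds zero is the normal CDF evaluated at d / se.
p_effective = 0.5 * (1 + erf(d / (se * sqrt(2))))
print(round(d, 2), [round(x, 2) for x in ci], round(p_effective, 3))
```

A statement like "the probability the intervention improved outcomes is p_effective" is the kind of direct evidence the abstract contrasts with a bare p-value.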
Troyer, Margaret – Journal of Research in Reading, 2022
Background: Randomised controlled trials (RCTs) have long been considered the gold standard in education research. Federal funds are allocated to evaluations that meet What Works Clearinghouse standards; RCT designs are required in order to meet these standards without reservations. Schools seek out interventions that are research based, in other…
Descriptors: Educational Research, Randomized Controlled Trials, Adolescents, Reading Instruction
Michael J. Weiss; Howard S. Bloom; Kriti Singh – Grantee Submission, 2022
This article provides evidence about predictive relationships between features of community college interventions and their impacts on student progress. This evidence is based on analyses of student-level data from large-scale randomized trials of 39 (mostly) community college interventions. Specifically, the evidence consistently indicates that…
Descriptors: Community College Students, Intervention, Predictive Measurement, Randomized Controlled Trials
What Works Clearinghouse, 2021
The What Works Clearinghouse (WWC) identifies existing research on educational interventions, assesses the quality of the research, and summarizes and disseminates the evidence from studies that meet WWC standards. The WWC aims to provide enough information so educators can use the research to make informed decisions in their settings. This…
Descriptors: Program Effectiveness, Intervention, Educational Research, Educational Quality
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
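The sample-size pressure this abstract describes follows from the standard two-arm power formula, n per arm = 2(z_{1-α/2} + z_{1-β})²/d²: halving the minimum detectable effect quadruples the required sample. A minimal sketch (generic formula, not the authors' calculations):

```python
from math import ceil

def n_per_arm(d, z_alpha=1.96, z_beta=0.84):
    """Per-arm sample size for a two-arm trial, 80% power, two-sided alpha=.05.

    d is the minimum detectable effect in standard deviation units.
    """
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_arm(0.20))  # 392 per arm for Cohen's "small" effect of 0.20 SD
print(n_per_arm(0.05))  # 6272 per arm: 16x larger for a quarter of the effect
```

Detecting impacts well below 0.20 standard deviations therefore forces much larger (and costlier) trials, which is the tension the abstract points to.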