Showing 1 to 15 of 27 results
Peer reviewed
William Herbert Yeaton – International Journal of Research & Method in Education, 2024
Though previously unacknowledged, a SMART (Sequential Multiple Assignment Randomized Trial) design uses both regression discontinuity (RD) and randomized controlled trial (RCT) designs. This combination structure creates a conceptual symbiosis between the two designs that enables both RCT- and previously unrecognized, RD-based inferential claims.…
Descriptors: Research Design, Randomized Controlled Trials, Regression (Statistics), Inferences
Peer reviewed
Kyle Cox; Ben Kelcey; Hannah Luce – Journal of Experimental Education, 2024
Comprehensive evaluation of treatment effects is aided by considerations for moderated effects. In educational research, the combination of natural hierarchical structures and prevalence of group-administered or shared facilitator treatments often produces three-level partially nested data structures. Literature details planning strategies for a…
Descriptors: Randomized Controlled Trials, Monte Carlo Methods, Hierarchical Linear Modeling, Educational Research
Edmunds, Julie A.; Gicheva, Dora; Thrift, Beth; Hull, Marie – Journal of Mixed Methods Research, 2022
Randomized controlled trials (RCTs) in education are common as the design allows for an unbiased estimate of the overall impact of a program. As more RCTs are completed, researchers are also noting that an overall average impact may mask substantial variation across sites or groups of individuals. Mixed methods can provide insight and help in…
Descriptors: Randomized Controlled Trials, Mixed Methods Research, Educational Research, Online Courses
A. Brooks Bowden – AERA Open, 2023
Although experimental evaluations have been labeled the "gold standard" of evidence for policy (U.S. Department of Education, 2003), evaluations without an analysis of costs are not sufficient for policymaking (Monk, 1995; Ross et al., 2007). Funding organizations now require cost-effectiveness data in most evaluations of effects. Yet,…
Descriptors: Cost Effectiveness, Program Evaluation, Economics, Educational Finance
Peer reviewed
Brown, John F. – Educational Research and Evaluation, 2022
This paper discusses adapting Churches' approach to large-scale teacher/researcher conceptual replications of major "science of learning" findings, to increase teachers' engagement with empirical research on, and building research networks for, gathering data on the science of learning. The project here demonstrated the feasibility of…
Descriptors: Replication (Evaluation), Educational Research, Randomized Controlled Trials, Outcome Measures
Edovald, Triin; Nevill, Camilla – ECNU Review of Education, 2021
Purpose: This article gives an overview of the successes and lessons learned to date of the Education Endowment Foundation (EEF), one of the leading organizations of the What Works movement. Design/Approach/Methods: Starting with its history, this article covers salient components of the EEF's unique journey including lessons learned and…
Descriptors: Foreign Countries, Philanthropic Foundations, Educational Research, Evidence Based Practice
Sam Sims; Jake Anders; Matthew Inglis; Hugues Lortie-Forgues; Ben Styles; Ben Weidmann – Annenberg Institute for School Reform at Brown University, 2023
Over the last twenty years, education researchers have increasingly conducted randomised experiments with the goal of informing the decisions of educators and policymakers. Such experiments have generally employed broad, consequential, standardised outcome measures in the hope that this would allow decisionmakers to compare effectiveness of…
Descriptors: Educational Research, Research Methodology, Randomized Controlled Trials, Program Effectiveness
Peer reviewed
PDF on ERIC
What Works Clearinghouse, 2021
The What Works Clearinghouse (WWC) identifies existing research on educational interventions, assesses the quality of the research, and summarizes and disseminates the evidence from studies that meet WWC standards. The WWC aims to provide enough information so educators can use the research to make informed decisions in their settings. This…
Descriptors: Program Effectiveness, Intervention, Educational Research, Educational Quality
Peer reviewed
Jacob, Robin T.; Doolittle, Fred; Kemple, James; Somers, Marie-Andrée – Educational Researcher, 2019
A substantial number of randomized trials of educational interventions that have been conducted over the past two decades have produced null results, with either no impact or an unreliable estimate of impact on student achievement or other outcomes of interest. The investment of time and money spent implementing such trials warrants more useful…
Descriptors: Intervention, Randomized Controlled Trials, Educational Research, Program Effectiveness
Peer reviewed
Lortie-Forgues, Hugues; Inglis, Matthew – Educational Researcher, 2019
In this response, we first show that Simpson's proposed analysis answers a different and less interesting question than ours. We then justify the choice of prior for our Bayes factors calculations, but we also demonstrate that the substantive conclusions of our article are not substantially affected by varying this choice.
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
Peer reviewed
Norwich, Brahm; Koutsouris, George – International Journal of Research & Method in Education, 2020
This paper describes the context, processes and issues experienced over 5 years in which an RCT was carried out to evaluate a programme for children aged 7-8 who were struggling with their reading. Its specific aim is to illuminate questions about the design of complex teaching approaches and their evaluation using an RCT. This covers the early…
Descriptors: Randomized Controlled Trials, Program Evaluation, Reading Programs, Educational Research
Sales, Adam C.; Hansen, Ben B. – Journal of Educational and Behavioral Statistics, 2020
Conventionally, regression discontinuity analysis contrasts a univariate regression's limits as its independent variable, "R," approaches a cut point, "c," from either side. Alternative methods target the average treatment effect in a small region around "c," at the cost of an assumption that treatment assignment,…
Descriptors: Regression (Statistics), Computation, Statistical Inference, Robustness (Statistics)
Peer reviewed
Taber, Keith S. – Studies in Science Education, 2019
Experimental studies are often employed to test the effectiveness of teaching innovations such as new pedagogy, curriculum, or learning resources. This article offers guidance on good practice in developing research designs, and in drawing conclusions from published reports. Random control trials potentially support the use of statistical…
Descriptors: Instructional Innovation, Educational Research, Research Design, Research Methodology
Peer reviewed
Joyce, Kathryn E.; Cartwright, Nancy – American Educational Research Journal, 2020
This article addresses the gap between what works in research and what works in practice. Currently, research in evidence-based education policy and practice focuses on randomized controlled trials. These can support causal ascriptions ("It worked") but provide little basis for local effectiveness predictions ("It will work…
Descriptors: Theory Practice Relationship, Educational Policy, Evidence Based Practice, Educational Research
Peer reviewed
PDF on ERIC
What Works Clearinghouse, 2017
"Attrition" is the loss of sample during the course of a study. It occurs when individuals initially randomly assigned in a study are not included when researchers examine the outcome of interest. Attrition is a common issue in education research, and it occurs for many reasons. The What Works Clearinghouse (WWC) is an initiative of the…
Descriptors: Attrition (Research Studies), Control Groups, Experimental Groups, Randomized Controlled Trials