Showing 1 to 15 of 26 results
Peer reviewed
Direct link
Huey T. Chen; Liliana Morosanu; Victor H. Chen – Asia Pacific Journal of Education, 2024
The Campbellian validity typology has been used as a foundation for outcome evaluation and for developing evidence-based interventions for decades. As such, randomized controlled trials were preferred for outcome evaluation. However, some evaluators disagree with the validity typology's argument that randomized controlled trials are the best design…
Descriptors: Evaluation Methods, Systems Approach, Intervention, Evidence Based Practice
Peer reviewed
Direct link
Katherine Pye; Hannah Jackson; Teresa Iacono; Alan Shiell – Journal of Autism and Developmental Disorders, 2024
Many autistic children access some form of early intervention, but little is known about the value for money of different programs. We completed a scoping review of full economic evaluations of early interventions for autistic children and/or their families. We identified nine studies and reviewed their methods and quality. Most studies involved…
Descriptors: Economics, Early Intervention, Autism Spectrum Disorders, Children
Peer reviewed
Direct link
Anthony Gambino – Society for Research on Educational Effectiveness, 2021
Analysis of symmetrically predicted endogenous subgroups (ASPES) is an approach to assessing heterogeneity in an ITT effect from a randomized experiment when an intermediate variable (one that is measured after random assignment and before outcomes) is hypothesized to be related to the ITT effect, but is only measured in one group. For example,…
Descriptors: Randomized Controlled Trials, Prediction, Program Evaluation, Credibility
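The ASPES logic sketched in this abstract lends itself to a short simulation. The code below is a minimal illustrative sketch, not Gambino's actual procedure: it fits a prediction model for the intermediate variable in the treatment group (where it is observed), applies that model symmetrically to both groups so that subgroup membership depends only on baseline covariates, and then estimates the ITT effect within each predicted subgroup. All data and variable names are hypothetical, and refinements such as cross-fitting are omitted.

```python
# A minimal, hypothetical sketch of the ASPES idea: predict an intermediate
# variable (observed only in the treatment group) from baseline covariates,
# assign BOTH groups to predicted subgroups with the same model, then estimate
# the ITT effect within each predicted subgroup.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))            # baseline covariates
T = rng.integers(0, 2, size=n)         # random assignment
# Intermediate variable (e.g., engagement); in practice observed only when T == 1
M = (X[:, 0] + rng.normal(size=n) > 0).astype(int)
Y = 0.5 * T * M + X[:, 1] + rng.normal(size=n)   # simulated outcome

# Fit the prediction model on the treatment group only (where M is observed)...
model = LogisticRegression().fit(X[T == 1], M[T == 1])
# ...but apply it symmetrically to everyone, so subgroup membership depends
# only on baseline covariates and preserves the experimental comparison.
subgroup = model.predict(X)

for g in (0, 1):
    mask = subgroup == g
    itt = Y[mask & (T == 1)].mean() - Y[mask & (T == 0)].mean()
    print(f"Predicted subgroup {g}: ITT estimate = {itt:.3f}")
```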
Maynard, Rebecca A.; Baelen, Rebecca N.; Fein, David; Souvanna, Phomdaen – Grantee Submission, 2022
This article offers a case example of how experimental evaluation methods can be coupled with principles of design-based implementation research (DBIR), improvement science (IS), and rapid-cycle evaluation (RCE) methods to provide relatively quick, low-cost, credible assessments of strategies designed to improve programs, policies, or practices.…
Descriptors: Program Improvement, Evaluation Methods, Efficiency, Young Adults
Peer reviewed
PDF on ERIC (full text available)
What Works Clearinghouse, 2022
Education decisionmakers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time-consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Peer reviewed
Direct link
Lortie-Forgues, Hugues; Inglis, Matthew – Educational Researcher, 2019
In this response, we first show that Simpson's proposed analysis answers a different and less interesting question than ours. We then justify the choice of prior for our Bayes factors calculations, but we also demonstrate that the substantive conclusions of our article are not substantially affected by varying this choice.
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
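The prior-sensitivity argument in this response can be made concrete with a small worked example. The sketch below is illustrative only, not the authors' code; the effect size, standard error, and prior standard deviations are invented. It computes a Bayes factor for an estimated effect under H1: delta ~ Normal(0, prior_sd^2) against H0: delta = 0, then varies prior_sd to check whether the qualitative conclusion changes.

```python
# Hypothetical Bayes factor sensitivity check for a single trial's effect
# size estimate d with standard error se.

from scipy.stats import norm

def bayes_factor_10(d, se, prior_sd):
    """BF10 for an estimated effect d with standard error se.
    Under H0 the estimate is Normal(0, se^2); under H1 the true effect has a
    Normal(0, prior_sd^2) prior, so marginally d ~ Normal(0, se^2 + prior_sd^2)."""
    like_h1 = norm.pdf(d, loc=0.0, scale=(se**2 + prior_sd**2) ** 0.5)
    like_h0 = norm.pdf(d, loc=0.0, scale=se)
    return like_h1 / like_h0

d, se = 0.05, 0.04          # made-up effect size and standard error
for prior_sd in (0.1, 0.2, 0.4):
    print(f"prior_sd={prior_sd}: BF10 = {bayes_factor_10(d, se, prior_sd):.2f}")
```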
Wong, Vivian C.; Steiner, Peter M.; Anglin, Kylie L. – Grantee Submission, 2018
Given the widespread use of non-experimental (NE) methods for assessing program impacts, there is a strong need to know whether NE approaches yield causally valid results in field settings. In within-study comparison (WSC) designs, the researcher compares treatment effects from an NE with those obtained from a randomized experiment that shares the…
Descriptors: Evaluation Methods, Program Evaluation, Program Effectiveness, Comparative Analysis
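The within-study comparison design described here can be illustrated with a toy simulation. The sketch below is a hypothetical example, not the authors' design: it estimates the same treatment effect once against a randomized control group and once against a non-randomized comparison group drawn from a different population, then treats the discrepancy as an estimate of the non-experimental design's bias.

```python
# A minimal simulated within-study comparison (WSC): the RCT benchmark and a
# naive non-experimental (NE) contrast share the same treatment group.

import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                    # a confounder
treated = rng.integers(0, 2, size=n)      # randomized assignment
y = 0.3 * treated + 0.8 * x + rng.normal(size=n)

# Non-randomized comparison group: selection depends on the confounder.
x_c = rng.normal(loc=0.5, size=n)
y_c = 0.8 * x_c + rng.normal(size=n)

rct_effect = y[treated == 1].mean() - y[treated == 0].mean()
ne_effect = y[treated == 1].mean() - y_c.mean()   # naive NE contrast
print(f"RCT benchmark: {rct_effect:.3f}")
print(f"NE estimate:   {ne_effect:.3f}")
print(f"Estimated bias of the NE design: {ne_effect - rct_effect:.3f}")
```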
Peer reviewed
Direct link
Simpson, Adrian – Educational Researcher, 2019
A recent paper uses Bayes factors to argue that a large minority of rigorous, large-scale education RCTs are "uninformative." The definition of "uninformative" depends on the authors' hypothesis choices for calculating Bayes factors. These arguably overadjust for effect size inflation and involve a fixed prior distribution,…
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
Peer reviewed
PDF on ERIC (full text available)
Demack, Sean; Maxwell, Bronwen; Coldwell, Mike; Stevens, Anna; Wolstenholme, Claire; Reaney-Wood, Sarah; Stiell, Bernadette; Lortie-Forgues, Hugues – Education Endowment Foundation, 2021
This report presents findings from exploratory, descriptive meta-analyses of effect sizes reported by the first 82 EEF evaluations that used a randomised controlled trial (RCT) or clustered RCT impact evaluation design published up to January 2019. The review used a theoretical framework derived from literature with five overarching themes to…
Descriptors: Foreign Countries, Disadvantaged Youth, Meta Analysis, Effect Size
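For readers unfamiliar with the pooling step that meta-analyses of trial effect sizes rely on, the sketch below shows a standard DerSimonian-Laird random-effects calculation. It is a generic illustration of the technique, not the report's analysis, and the effect sizes and standard errors are invented, not EEF results.

```python
# Generic DerSimonian-Laird random-effects meta-analysis on made-up inputs.

import numpy as np

def dersimonian_laird(effects, std_errs):
    y = np.asarray(effects, float)
    v = np.asarray(std_errs, float) ** 2
    w = 1.0 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)              # fixed-effect pooled mean
    q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

pooled, se, tau2 = dersimonian_laird([0.10, 0.02, 0.25, -0.05],
                                     [0.05, 0.08, 0.06, 0.07])
print(f"Pooled effect = {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.4f}")
```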
Peer reviewed
PDF on ERIC (full text available)
Demack, Sean; Maxwell, Bronwen; Coldwell, Mike; Stevens, Anna; Wolstenholme, Claire; Reaney-Wood, Sarah; Stiell, Bernadette; Lortie-Forgues, Hugues – Education Endowment Foundation, 2021
This document summarises key findings from the quantitative strands of a review of the Education Endowment Foundation (EEF) evaluations that had reported from the establishment of EEF in 2011 up to January 2019. The quantitative strands summarised include meta-analyses of effect sizes reported for attainment outcomes and descriptive analyses of…
Descriptors: Foreign Countries, Disadvantaged Youth, Meta Analysis, Effect Size
Peer reviewed
PDF on ERIC (full text available)
Demack, Sean; Maxwell, Bronwen; Coldwell, Mike; Stevens, Anna; Wolstenholme, Claire; Reaney-Wood, Sarah; Stiell, Bernadette; Lortie-Forgues, Hugues – Education Endowment Foundation, 2021
This document provides supplementary statistical tables to support the review of Education Endowment Foundation (EEF) evaluations. This includes: (1) Descriptive (univariate) tables for all explanatory variables; (2) Tables for the meta-analyses of primary ITT effect sizes; (3) Tables for the meta-analyses of secondary ITT effect sizes; (4) Tables…
Descriptors: Foreign Countries, Disadvantaged Youth, Meta Analysis, Effect Size
Peer reviewed
PDF on ERIC (full text available)
Jake Anders; Chris Brown; Melanie Ehren; Toby Greany; Rebecca Nelson; Jessica Heal; Bibi Groot; Michael Sanders; Rebecca Allen – Education Endowment Foundation, 2017
Evaluating the impact of complex whole-school interventions (CWSIs) is challenging. However, what evidence there is suggests that school leadership and other elements of whole-school contexts are important for pupils' attainment (Leithwood et al., 2006), indicating that interventions aimed at changing these have significant potential to improve…
Descriptors: Leadership Styles, Program Implementation, Leadership Responsibility, Program Evaluation
Peer reviewed
PDF on ERIC (full text available)
Jones, Stephanie M.; Barnes, Sophie P.; Bailey, Rebecca; Doolittle, Emily J. – Future of Children, 2017
There's a strong case for making social and emotional learning (SEL) skills and competencies a central feature of elementary school. Children who master SEL skills get along better with others, do better in school, and have more successful careers and better mental and physical health as adults. Evidence from the most rigorous studies of…
Descriptors: Social Development, Emotional Development, Competence, Elementary School Students
Peer reviewed
PDF on ERIC (full text available)
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2017
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies. The impact estimators are derived using the building blocks of experimental designs with minimal assumptions, and have good statistical properties. The methods apply to randomized controlled trials (RCTs) and…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
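In the simplest RCT case, design-based estimators of the kind this report describes reduce to a difference in group means with a variance justified by randomization alone rather than by a regression model. The sketch below illustrates that Neyman-style estimator on simulated data; the outcomes and sample sizes are hypothetical, and this is an illustration of the general idea rather than the report's methods.

```python
# Design-based (Neyman) impact estimate for a simple two-arm RCT.

import numpy as np

def neyman_estimate(y_treat, y_ctrl):
    y1 = np.asarray(y_treat, float)
    y0 = np.asarray(y_ctrl, float)
    impact = y1.mean() - y0.mean()
    # Conservative design-based variance: s1^2/n1 + s0^2/n0
    var = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
    return impact, np.sqrt(var)

rng = np.random.default_rng(2)
y_treat = 0.2 + rng.normal(size=400)   # simulated treatment-group outcomes
y_ctrl = rng.normal(size=400)          # simulated control-group outcomes
impact, se = neyman_estimate(y_treat, y_ctrl)
print(f"Estimated impact = {impact:.3f} (SE {se:.3f})")
```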
Peer reviewed
PDF on ERIC (full text available)
Kautz, Tim; Schochet, Peter Z.; Tilley, Charles – National Center for Education Evaluation and Regional Assistance, 2017
A new design-based theory has recently been developed to estimate impacts for randomized controlled trials (RCTs) and basic quasi-experimental designs (QEDs) for a wide range of designs used in social policy research (Imbens & Rubin, 2015; Schochet, 2016). These methods use the potential outcomes framework and known features of study designs…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology