Showing 1 to 15 of 33 results
Peer reviewed
Direct link
Ishita Ahmed; Masha Bertling; Lijin Zhang; Andrew Ho; Prashant Loyalka; Scott Rozelle; Ben Domingue – Society for Research on Educational Effectiveness, 2023
Background: Evidence from education randomized controlled trials (RCTs) in low- and middle-income countries (LMICs) demonstrates how interventions can improve children's educational achievement [1, 2, 3, 4]. RCTs assess the impact of an intervention by comparing outcomes--aggregate test scores--between treatment and control groups. A review of…
Descriptors: Randomized Controlled Trials, Educational Research, Outcome Measures, Research Design
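A minimal sketch of the comparison described in the abstract above: the estimated impact is the difference in mean aggregate test scores between treatment and control groups. The score arrays below are invented for illustration and are not data from the study.

```python
import numpy as np
from scipy import stats

# Illustrative test scores in standardized units; not data from the study.
treatment = np.array([0.31, 0.45, 0.12, 0.58, 0.27, 0.40])
control = np.array([0.18, 0.22, 0.05, 0.33, 0.10, 0.29])

# Estimated impact: difference in mean aggregate test scores.
impact = treatment.mean() - control.mean()

# Conventional two-sample comparison of the group means.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"Estimated impact: {impact:.3f} SD, p = {p_value:.3f}")
```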
Peer reviewed
Direct link
Wendy Castillo; Lindsay Dusard – Society for Research on Educational Effectiveness, 2024
Background: Twenty years ago, causal research in education was almost strictly quantitative; however, that landscape has changed considerably. The number of intervention studies fielded and completed annually has increased substantially, and the quality of the evaluations is much more robust, including paying much greater attention…
Descriptors: Randomized Controlled Trials, Educational Research, Equal Education, Educational Policy
Peer reviewed
Direct link
Timothy Lycurgus; Daniel Almirall – Society for Research on Educational Effectiveness, 2024
Background: Education scientists are increasingly interested in constructing interventions that are adaptive over time to suit the evolving needs of students, classrooms, or schools. Such "adaptive interventions" (also referred to as dynamic treatment regimens or dynamic instructional regimes) determine which treatment should be offered…
Descriptors: Educational Research, Research Design, Randomized Controlled Trials, Intervention
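The notion of an adaptive intervention in the abstract above can be made concrete with a simple two-stage decision rule. The treatment names, response measure, and threshold below are hypothetical, not taken from the paper.

```python
def second_stage_treatment(first_stage_score: float,
                           response_threshold: float = 0.0) -> str:
    """Hypothetical two-stage adaptive intervention rule.

    Stage 1: every student receives the core tutoring program.
    Stage 2: students whose interim score clears the threshold continue
    with the core program; non-responders are offered an augmented dose.
    Names and threshold are illustrative only.
    """
    if first_stage_score >= response_threshold:
        return "continue core tutoring"
    return "augment with intensive small-group sessions"

# Example: a student with an interim score of -0.4 SD is a non-responder.
print(second_stage_treatment(-0.4))
```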
Peer reviewed
Direct link
Wei Li; Walter Leite; Jia Quan – Society for Research on Educational Effectiveness, 2023
Background: Multilevel randomized controlled trials (MRCTs) have been widely used to evaluate the causal effects of educational interventions. Traditionally, educational researchers and policymakers focused on the average treatment effects (ATE) of the intervention. Recently there has been an increasing interest in evaluating the heterogeneity of…
Descriptors: Artificial Intelligence, Identification, Hierarchical Linear Modeling, Randomized Controlled Trials
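A minimal sketch of the quantities named in the abstract above: the average treatment effect in a two-level design (students nested in schools, schools randomized) estimated with a random school intercept, plus one naive treatment-by-moderator interaction to illustrate what heterogeneity of treatment effects means. The simulated data and variable names are assumptions, not the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated two-level data: 25 students in each of 40 schools; schools randomized.
n_schools, n_students = 40, 25
school = np.repeat(np.arange(n_schools), n_students)
treat = np.repeat(rng.integers(0, 2, n_schools), n_students)  # school-level assignment
female = rng.integers(0, 2, n_schools * n_students)           # illustrative moderator
school_effect = rng.normal(0, 0.3, n_schools)[school]
y = 0.20 * treat + 0.10 * treat * female + school_effect + rng.normal(0, 1, len(school))

df = pd.DataFrame({"y": y, "treat": treat, "female": female, "school": school})

# Average treatment effect from a mixed model with a random school intercept.
ate_model = smf.mixedlm("y ~ treat", df, groups=df["school"]).fit()
print(ate_model.params["treat"])

# One simple look at heterogeneity: a treatment-by-moderator interaction.
het_model = smf.mixedlm("y ~ treat * female", df, groups=df["school"]).fit()
print(het_model.params["treat:female"])
```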
Peer reviewed
Direct link
Peter Schochet – Society for Research on Educational Effectiveness, 2024
Random encouragement designs are randomized controlled trials (RCTs) that test interventions aimed at increasing participation in a program or activity whose take up is not universal. In these RCTs, instead of randomizing individuals or clusters directly into treatment and control groups to participate in a program or activity, the randomization…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
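For context on the design described in the abstract above: because the randomized encouragement affects participation but does not force it, a common starting point is to use the encouragement as an instrument for take-up and apply the Wald/IV estimator for the complier average effect. This is a generic sketch with invented data, not necessarily the estimator the paper develops.

```python
import numpy as np

# Illustrative data: Z = randomized encouragement, D = actual take-up, Y = outcome.
rng = np.random.default_rng(1)
n = 2000
Z = rng.integers(0, 2, n)
# Take-up is not universal: encouragement raises participation but does not guarantee it.
D = (rng.random(n) < np.where(Z == 1, 0.6, 0.2)).astype(int)
Y = 0.3 * D + rng.normal(0, 1, n)

# Wald / IV estimator: intent-to-treat effect divided by the take-up difference.
itt = Y[Z == 1].mean() - Y[Z == 0].mean()
first_stage = D[Z == 1].mean() - D[Z == 0].mean()
late = itt / first_stage
print(f"ITT = {itt:.3f}, take-up difference = {first_stage:.3f}, LATE = {late:.3f}")
```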
Peer reviewed
Direct link
Kelly Hallberg; Andrew Swanlund; Ryan Williams – Society for Research on Educational Effectiveness, 2021
Background: The COVID-19 pandemic and the subsequent public health response led to an unprecedented disruption in educational instruction in the U.S. and around the world. Many schools quickly moved to virtual learning for the bulk of the 2020 spring term and many states cancelled annual assessments of student learning. The 2020-21 school year…
Descriptors: Research Problems, Educational Research, Research Design, Randomized Controlled Trials
Peer reviewed
Direct link
Mark Fredrickson; Ben B. Hansen – Society for Research on Educational Effectiveness, 2021
Context: Assessments of baseline equivalency of intervention and control groups, "balance," play a critical role in evaluating educational interventions. The highest standard of the Institute of Education Sciences (IES) What Works Clearinghouse (WWC) for educational studies, "Meets WWC Design Standards Without Reservations,"…
Descriptors: Educational Research, Experimental Groups, Control Groups, Intervention
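One common way to operationalize the "balance" discussed in the abstract above is the standardized baseline difference between intervention and control groups on each covariate, which is also how WWC's baseline-equivalence guidance judges group comparability. The pooled-SD formula and data below are a generic sketch, not the authors' proposed diagnostic.

```python
import numpy as np

def standardized_mean_difference(treated: np.ndarray, control: np.ndarray) -> float:
    """Baseline difference in pooled-standard-deviation units (Cohen's d style)."""
    pooled_var = (
        (len(treated) - 1) * treated.var(ddof=1)
        + (len(control) - 1) * control.var(ddof=1)
    ) / (len(treated) + len(control) - 2)
    return (treated.mean() - control.mean()) / np.sqrt(pooled_var)

# Illustrative baseline covariate (e.g., a pretest score), not study data.
rng = np.random.default_rng(2)
pretest_t = rng.normal(0.02, 1.0, 150)
pretest_c = rng.normal(0.00, 1.0, 150)
print(f"Baseline SMD: {standardized_mean_difference(pretest_t, pretest_c):.3f}")
```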
Peer reviewed
PDF on ERIC Download full text
Bowden, A. Brooks – Society for Research on Educational Effectiveness, 2017
Initiatives during the Bush Administration and the Obama Administration may have set the stage for a "Golden Age of evidence-based policy" (Haskins, 2015). Together these efforts stress the importance of accurate, internally valid evidence that can inform decisions to more efficiently allocate public resources. In 2002, the U.S.…
Descriptors: Research Design, Costs, Randomized Controlled Trials, Educational Research
Peer reviewed
Direct link
Timothy Lycurgus; Ben B. Hansen – Society for Research on Educational Effectiveness, 2022
Background: Efficacy trials in education often possess a motivating theory of change: how and why should the desired improvement in outcomes occur as a consequence of the intervention? In scenarios with repeated measurements, certain subgroups may be more or less likely to manifest a treatment effect; the theory of change (TOC) provides guidance…
Descriptors: Educational Change, Educational Research, Intervention, Efficiency
Peer reviewed
PDF on ERIC Download full text
Schochet, Peter – Society for Research on Educational Effectiveness, 2018
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies (Freedman, 2008; Lin, 2013; Imbens and Rubin, 2015; Schochet, 2013, 2016; Yang and Tsiatis, 2001). The non-parametric estimators are derived using the building blocks of experimental designs with minimal…
Descriptors: Randomized Controlled Trials, Computation, Educational Research, Experimental Groups
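As background to the design-based approach named in the abstract above, the canonical building block is the Neyman difference-in-means estimator with its conservative variance estimate, which relies on the randomization itself rather than a parametric outcome model. The data below are illustrative.

```python
import numpy as np

def neyman_estimate(y_treat: np.ndarray, y_ctrl: np.ndarray):
    """Design-based (Neyman) impact estimate and conservative standard error."""
    impact = y_treat.mean() - y_ctrl.mean()
    variance = y_treat.var(ddof=1) / len(y_treat) + y_ctrl.var(ddof=1) / len(y_ctrl)
    return impact, np.sqrt(variance)

# Illustrative outcomes for a simple individually randomized design.
rng = np.random.default_rng(3)
y1 = rng.normal(0.15, 1.0, 400)   # treatment group
y0 = rng.normal(0.00, 1.0, 400)   # control group
est, se = neyman_estimate(y1, y0)
print(f"Impact = {est:.3f}, SE = {se:.3f}")
```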
Peer reviewed
Direct link
Claire Allen-Platt; Clara-Christina Gerstner; Robert Boruch; Alan Ruby – Society for Research on Educational Effectiveness, 2021
Background/Context: When a researcher tests an educational program, product, or policy in a randomized controlled trial (RCT) and detects a significant effect on an outcome, the intervention is usually classified as something that "works." When the expected effects are not found, however, there is seldom an orderly and transparent…
Descriptors: Educational Assessment, Randomized Controlled Trials, Evidence, Educational Research
Peer reviewed
PDF on ERIC Download full text
Deke, John; Wei, Thomas; Kautz, Tim – Society for Research on Educational Effectiveness, 2018
Evaluators of education interventions increasingly need to design studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." For example, an evaluation of Response to Intervention from the Institute of Education Sciences (IES) detected impacts ranging from 0.13 to 0.17 standard…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
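The point in the abstract above about detecting impacts well below 0.20 standard deviations follows directly from the standard two-sample power calculation: required sample size grows with the inverse square of the target effect. A rough sketch assuming individual random assignment, no covariates, 80% power, and a two-sided test at alpha = .05:

```python
from scipy.stats import norm

def n_per_group(effect_sd: float, power: float = 0.80, alpha: float = 0.05) -> int:
    """Approximate per-group sample size for a two-sided two-sample z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return int(round(2 * (z_alpha + z_power) ** 2 / effect_sd ** 2))

for delta in (0.20, 0.13, 0.05):
    print(f"MDES {delta:.2f} SD -> about {n_per_group(delta)} students per group")
```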
Peer reviewed
PDF on ERIC Download full text
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2016
The purpose of this study is to propose a general framework for power analyses to detect the moderator effects in two- and three-level cluster randomized trials (CRTs). The study specifically aims to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval to…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Effect Size, Computation
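As background to the MDES calculations named in the abstract above, the standard two-level cluster-randomized trial formula for a main effect is sketched below using a normal-approximation multiplier and no covariates; the moderator-effect MDES that is the paper's focus uses a different variance term, so this is context rather than the authors' formulation. Parameter values are illustrative.

```python
import math
from scipy.stats import norm

def mdes_two_level_crt(n_clusters: int, cluster_size: int, icc: float,
                       prop_treated: float = 0.5,
                       power: float = 0.80, alpha: float = 0.05) -> float:
    """Approximate MDES for the main effect in a two-level CRT
    (clusters randomized, normal-approximation multiplier, no covariates)."""
    multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    p = prop_treated * (1 - prop_treated)
    design = icc / (p * n_clusters) + (1 - icc) / (p * n_clusters * cluster_size)
    return multiplier * math.sqrt(design)

# Example: 40 schools of 25 students each, intraclass correlation 0.15.
print(f"MDES ~ {mdes_two_level_crt(40, 25, 0.15):.3f} SD")
```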
Peer reviewed
PDF on ERIC Download full text
Society for Research on Educational Effectiveness, 2017
Bayesian statistical methods have become more feasible to implement with advances in computing but are not commonly used in educational research. In contrast to frequentist approaches that take hypotheses (and the associated parameters) as fixed, Bayesian methods take data as fixed and hypotheses as random. This difference means that Bayesian…
Descriptors: Bayesian Statistics, Educational Research, Statistical Analysis, Decision Making
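A minimal illustration of the contrast drawn in the abstract above: with a normal prior on a treatment effect and a normal likelihood for its estimate, the posterior has a closed form and supports direct probability statements such as P(effect > 0 | data). The prior and the observed estimate below are invented for illustration.

```python
from scipy.stats import norm

# Hypothetical inputs: a skeptical prior and an observed impact estimate.
prior_mean, prior_sd = 0.0, 0.10   # prior belief about the effect (SD units)
est, est_se = 0.12, 0.06           # observed impact estimate and its standard error

# Conjugate normal-normal update (precision-weighted average).
prior_prec, data_prec = 1 / prior_sd**2, 1 / est_se**2
post_var = 1 / (prior_prec + data_prec)
post_mean = post_var * (prior_prec * prior_mean + data_prec * est)

# A probability statement about the hypothesis given the (fixed) data.
p_positive = 1 - norm.cdf(0, loc=post_mean, scale=post_var**0.5)
print(f"Posterior mean = {post_mean:.3f}, P(effect > 0 | data) = {p_positive:.3f}")
```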
Peer reviewed
PDF on ERIC Download full text
Cole, Russell; Deke, John; Seftor, Neil – Society for Research on Educational Effectiveness, 2016
The What Works Clearinghouse (WWC) maintains design standards to identify rigorous, internally valid education research. As education researchers advance new methodologies, the WWC must revise its standards to include an assessment of the new designs. Recently, the WWC has revised standards for two emerging study designs: regression discontinuity…
Descriptors: Educational Research, Research Design, Regression (Statistics), Multivariate Analysis
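For context on one of the designs named in the abstract above, the basic regression discontinuity estimate is the jump in the outcome at the assignment cutoff, often taken from a local linear fit on each side. The cutoff, bandwidth, and data below are illustrative; the WWC's actual RDD standards involve criteria beyond this estimate.

```python
import numpy as np

# Illustrative sharp regression discontinuity: treatment assigned when score >= cutoff.
rng = np.random.default_rng(4)
cutoff, bandwidth = 0.0, 0.5
score = rng.uniform(-1, 1, 3000)              # forcing (assignment) variable
treated = (score >= cutoff).astype(int)
y = 0.5 * score + 0.25 * treated + rng.normal(0, 0.3, score.size)

# Local linear fit on each side of the cutoff within the bandwidth.
left = (score < cutoff) & (score > cutoff - bandwidth)
right = (score >= cutoff) & (score < cutoff + bandwidth)
fit_left = np.polyfit(score[left], y[left], 1)
fit_right = np.polyfit(score[right], y[right], 1)

# The RD impact estimate is the difference in fitted values at the cutoff.
rd_effect = np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)
print(f"Estimated effect at the cutoff: {rd_effect:.3f}")
```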