Showing 1 to 15 of 36 results
Peer reviewed
Huibin Zhang; Zuchao Shen; Walter L. Leite – Journal of Experimental Education, 2025
Cluster-randomized trials have been widely used to evaluate the treatment effects of interventions on student outcomes. When interventions are implemented by teachers, researchers need to account for the nested structure in schools (i.e., students are nested within teachers nested within schools). Schools usually have a very limited number of…
Descriptors: Sample Size, Multivariate Analysis, Randomized Controlled Trials, Correlation
Peer reviewed
Peter Z. Schochet – Journal of Educational and Behavioral Statistics, 2025
Random encouragement designs evaluate treatments that aim to increase participation in a program or activity. These randomized controlled trials (RCTs) can also assess the mediated effects of participation itself on longer term outcomes using a complier average causal effect (CACE) estimation framework. This article considers power analysis…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
Peer reviewed
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook; Yanli Xie; Dung Pham; Peilin Qiu; Ning Sui – Grantee Submission, 2024
Multisite trials that randomize individuals (e.g., students) within sites (e.g., schools) or clusters (e.g., teachers/classrooms) within sites (e.g., schools) are commonly used for program evaluation because they provide opportunities to learn about treatment effects as well as their heterogeneity across sites and subgroups (defined by moderating…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Educational Research, Effect Size
Peer reviewed
Peter Schochet – Society for Research on Educational Effectiveness, 2024
Random encouragement designs are randomized controlled trials (RCTs) that test interventions aimed at increasing participation in a program or activity whose take-up is not universal. In these RCTs, instead of randomizing individuals or clusters directly into treatment and control groups to participate in a program or activity, the randomization…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
Peer reviewed
Sims, Sam; Anders, Jake; Inglis, Matthew; Lortie-Forgues, Hugues – Journal of Research on Educational Effectiveness, 2023
Randomized controlled trials have proliferated in education, in part because they provide an unbiased estimator for the causal impact of interventions. It is increasingly recognized that many such trials in education have low power to detect an effect if indeed there is one. However, it is less well known that low-powered trials tend to…
Descriptors: Randomized Controlled Trials, Educational Research, Effect Size, Intervention
Peer reviewed
Sandra Jo Wilson; Brian Freeman; E. C. Hedberg – Grantee Submission, 2024
As reporting of effect sizes in evaluation studies has proliferated, researchers and consumers of research need tools for interpreting or benchmarking the magnitude of those effect sizes that are relevant to the intervention, target population, and outcome measure being considered. Similarly, researchers planning education studies with social and…
Descriptors: Benchmarking, Effect Size, Meta Analysis, Statistical Analysis
Peer reviewed
Simpson, Adrian – Journal of Research on Educational Effectiveness, 2023
Evidence-based education aims to support policy makers choosing between potential interventions. This rarely involves considering each in isolation; instead, sets of evidence regarding many potential policy interventions are considered. Filtering a set on any quantity measured with error risks the "winner's curse": conditional on…
Descriptors: Effect Size, Educational Research, Evidence Based Practice, Foreign Countries
Peer reviewed
Martin Brunner; Sophie E. Stallasch; Cordula Artelt; Oliver Lüdtke – Educational Psychology Review, 2025
There is a need for robust evidence about which educational interventions work in preschool to foster children's cognitive and socio-emotional learning (SEL) outcomes. Lab-based individually randomized experiments can develop and refine such interventions, and field-based randomized experiments (e.g., cluster randomized trials) evaluate their…
Descriptors: Preschools, Social Emotional Learning, Outcomes of Education, Cognitive Objectives
Peer reviewed
Li, Wei; Dong, Nianbo; Maynard, Rebecca A. – Journal of Educational and Behavioral Statistics, 2020
Cost-effectiveness analysis is a widely used educational evaluation tool. The randomized controlled trials that aim to evaluate the cost-effectiveness of the treatment are commonly referred to as randomized cost-effectiveness trials (RCETs). This study provides methods of power analysis for two-level multisite RCETs. Power computations take…
Descriptors: Statistical Analysis, Cost Effectiveness, Randomized Controlled Trials, Educational Research
Peer reviewed
Kelcey, Ben; Spybrook, Jessaca; Dong, Nianbo; Bai, Fangxing – Journal of Research on Educational Effectiveness, 2020
Professional development for teachers is regarded as one of the principal pathways through which we can understand and cultivate effective teaching and improve student outcomes. A critical component of studies that seek to improve teaching through professional development is the detailed assessment of the intermediate teacher development processes…
Descriptors: Faculty Development, Educational Research, Randomized Controlled Trials, Research Design
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2018
Design-based methods have recently been developed as a way to analyze randomized controlled trial (RCT) data for designs with a single treatment and control group. This article builds on this framework to develop design-based estimators for evaluations with multiple research groups. Results are provided for a wide range of designs used in…
Descriptors: Randomized Controlled Trials, Computation, Educational Research, Experimental Groups
Peer reviewed
PDF full text available on ERIC
Schochet, Peter – Society for Research on Educational Effectiveness, 2018
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies (Freedman, 2008; Lin, 2013; Imbens and Rubin, 2015; Schochet, 2013, 2016; Yang and Tsiatis, 2001). The non-parametric estimators are derived using the building blocks of experimental designs with minimal…
Descriptors: Randomized Controlled Trials, Computation, Educational Research, Experimental Groups
Peer reviewed
Claire Allen-Platt; Clara-Christina Gerstner; Robert Boruch; Alan Ruby – Society for Research on Educational Effectiveness, 2021
Background/Context: When a researcher tests an educational program, product, or policy in a randomized controlled trial (RCT) and detects a significant effect on an outcome, the intervention is usually classified as something that "works." When the expected effects are not found, however, there is seldom an orderly and transparent…
Descriptors: Educational Assessment, Randomized Controlled Trials, Evidence, Educational Research
Sales, Adam C.; Hansen, Ben B. – Journal of Educational and Behavioral Statistics, 2020
Conventionally, regression discontinuity analysis contrasts a univariate regression's limits as its independent variable, "R," approaches a cut point, "c," from either side. Alternative methods target the average treatment effect in a small region around "c," at the cost of an assumption that treatment assignment,…
Descriptors: Regression (Statistics), Computation, Statistical Inference, Robustness (Statistics)
Peer reviewed
Spybrook, Jessaca; Kelcey, Benjamin; Dong, Nianbo – Journal of Educational and Behavioral Statistics, 2016
Recently, there has been an increase in the number of cluster randomized trials (CRTs) to evaluate the impact of educational programs and interventions. These studies are often powered for the main effect of treatment to address the "what works" question. However, program effects may vary by individual characteristics or by context,…
Descriptors: Randomized Controlled Trials, Statistical Analysis, Computation, Educational Research