Showing all 10 results
Peer reviewed | Direct link
Huibin Zhang; Zuchao Shen; Walter L. Leite – Journal of Experimental Education, 2025
Cluster-randomized trials have been widely used to evaluate the treatment effects of interventions on student outcomes. When interventions are implemented by teachers, researchers need to account for the nested structure in schools (i.e., students are nested within teachers nested within schools). Schools usually have a very limited number of…
Descriptors: Sample Size, Multivariate Analysis, Randomized Controlled Trials, Correlation
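The Zhang, Shen, and Leite entry above concerns power for three-level cluster-randomized designs. A minimal sketch of the kind of calculation involved, using a common variance-decomposition approximation and entirely assumed parameter values (it is not the article's method):

```python
import numpy as np
from scipy.stats import t as t_dist

# Illustrative MDES calculation for a three-level cluster-randomized design
# (schools randomized; students nested in teachers nested in schools).
# All parameter values are assumptions chosen for illustration.
K, J, n = 40, 3, 25                      # schools, teachers/school, students/teacher
P = 0.5                                  # proportion of schools treated
rho_school, rho_teacher = 0.15, 0.10     # assumed intraclass correlations
alpha, power = 0.05, 0.80

df = K - 2                               # approximate school-level degrees of freedom
M = t_dist.ppf(1 - alpha / 2, df) + t_dist.ppf(power, df)
var_term = (rho_school
            + rho_teacher / J
            + (1 - rho_school - rho_teacher) / (J * n)) / (P * (1 - P) * K)
print(f"Approximate MDES: {M * np.sqrt(var_term):.3f} SD")
```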
Peer reviewed | Direct link
Peter Z. Schochet – Journal of Educational and Behavioral Statistics, 2025
Random encouragement designs evaluate treatments that aim to increase participation in a program or activity. These randomized controlled trials (RCTs) can also assess the mediated effects of participation itself on longer term outcomes using a complier average causal effect (CACE) estimation framework. This article considers power analysis…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
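Schochet's power-analysis framing rests on the standard result that, under the usual instrumental-variables assumptions, the CACE equals the ITT effect divided by the compliance rate. A minimal sketch of the implication for detectable effects, with assumed sample size and compliance rate (not taken from the article):

```python
from math import sqrt
from scipy.stats import norm

# Under standard IV assumptions the CACE equals ITT / compliance rate, so the
# minimum detectable CACE inflates by 1 / compliance_rate (and required n by
# roughly 1 / compliance_rate**2). All numbers below are assumptions.
alpha, power = 0.05, 0.80
M = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # about 2.80 in large samples
n_total = 2000                                   # assumed total sample, 1:1 assignment
compliance_rate = 0.5                            # assumed take-up among the encouraged

mde_itt = M * sqrt(4 / n_total)                  # standardized outcome, equal arms
mde_cace = mde_itt / compliance_rate
print(f"MDE (ITT):  {mde_itt:.3f} SD")
print(f"MDE (CACE): {mde_cace:.3f} SD")
```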
Peer reviewed | Direct link
Peter Schochet – Society for Research on Educational Effectiveness, 2024
Random encouragement designs are randomized controlled trials (RCTs) that test interventions aimed at increasing participation in a program or activity whose take up is not universal. In these RCTs, instead of randomizing individuals or clusters directly into treatment and control groups to participate in a program or activity, the randomization…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
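To make the design described above concrete, here is a small simulation sketch of an encouragement design and the Wald (IV) estimator of the CACE; the data-generating process and compliance rate are assumptions for illustration, not the paper's setup:

```python
import numpy as np

# Simulation sketch of an encouragement design: randomization assigns an offer
# (encouragement), not participation; participation is instrumented by the
# offer, and the Wald ratio ITT_outcome / ITT_participation estimates the CACE.
# Compliance rate and the true effect are assumptions for illustration.
rng = np.random.default_rng(0)
n = 10_000
encouraged = rng.integers(0, 2, n)                     # random encouragement offer
complier = rng.random(n) < 0.5                         # assumed 50% compliers
participates = (encouraged == 1) & complier            # take-up only if encouraged
outcome = 0.3 * participates + rng.normal(size=n)      # assumed true CACE = 0.3 SD

itt_y = outcome[encouraged == 1].mean() - outcome[encouraged == 0].mean()
itt_d = participates[encouraged == 1].mean() - participates[encouraged == 0].mean()
print(f"ITT on outcome: {itt_y:.3f}   ITT on take-up: {itt_d:.3f}   "
      f"Wald CACE estimate: {itt_y / itt_d:.3f}")
```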
Peer reviewed | Full text available on ERIC (PDF)
Sales, Adam C.; Prihar, Ethan B.; Gagnon-Bartsch, Johann A.; Heffernan, Neil T. – Journal of Educational Data Mining, 2023
Randomized A/B tests within online learning platforms represent an exciting direction in learning sciences. With minimal assumptions, they allow causal effect estimation without confounding bias and exact statistical inference even in small samples. However, often experimental samples and/or treatment effects are small, A/B tests are underpowered,…
Descriptors: Data Use, Research Methodology, Randomized Controlled Trials, Educational Technology
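The "exact statistical inference even in small samples" point above refers to randomization-based tests. A minimal sketch of a Fisher randomization test for a tiny A/B test, with made-up outcomes (illustrative only, not the authors' estimator):

```python
import numpy as np
from itertools import combinations

# Fisher randomization test for a tiny A/B test: enumerate every possible
# assignment under the sharp null of no effect and compare the observed
# difference in means against that exact reference distribution. Data invented.
y = np.array([3.1, 2.4, 4.0, 1.8, 2.9, 3.6, 2.2, 3.3])
treated_idx = (0, 2, 5, 7)                                  # observed assignment
observed = y[list(treated_idx)].mean() - np.delete(y, treated_idx).mean()

stats = [y[list(idx)].mean() - np.delete(y, idx).mean()
         for idx in combinations(range(len(y)), len(treated_idx))]   # all 70 splits
p_value = np.mean(np.abs(stats) >= abs(observed))
print(f"Observed difference: {observed:.3f}   exact two-sided p = {p_value:.3f}")
```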
Peer reviewed | Direct link
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
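The sample-size stakes behind detecting impacts smaller than 0.20 standard deviations can be seen with a back-of-the-envelope calculation; the parameters below are assumptions, and the design is a simple individual-level RCT rather than anything specific to the article:

```python
from scipy.stats import norm

# Back-of-the-envelope sample sizes for a simple individual-level RCT with a
# standardized outcome and 1:1 allocation: required n grows with 1 / MDE**2,
# so halving the target impact roughly quadruples the sample. Assumed values.
alpha, power = 0.05, 0.80
M = norm.ppf(1 - alpha / 2) + norm.ppf(power)
for mde in (0.20, 0.10, 0.05):
    n_total = 4 * (M / mde) ** 2
    print(f"MDE = {mde:.2f} SD  ->  total n ≈ {n_total:,.0f}")
```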
Isaac M. Opper – Annenberg Institute for School Reform at Brown University, 2021
Researchers often include covariates when they analyze the results of randomized controlled trials (RCTs), valuing the increased precision of the estimates over the potential of inducing small-sample bias when doing so. In this paper, we develop a sufficient condition which ensures that the inclusion of covariates does not cause small-sample bias…
Descriptors: Randomized Controlled Trials, Sample Size, Statistical Bias, Artificial Intelligence
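Opper's question concerns whether adding covariates to an RCT analysis can bias estimates in small samples. A toy simulation sketch of that comparison under an assumed data-generating process (it does not implement the paper's sufficient condition):

```python
import numpy as np

# Toy check of small-sample bias from covariate adjustment: simulate many tiny
# RCTs and compare the unadjusted difference in means with an OLS estimate
# that includes a covariate. The data-generating process is assumed.
rng = np.random.default_rng(2)
n, reps, tau = 12, 10_000, 0.5
unadj, adj = [], []
for _ in range(reps):
    x = rng.normal(size=n)
    t = rng.permutation([1] * (n // 2) + [0] * (n // 2))
    y = tau * t + x + rng.normal(size=n)
    unadj.append(y[t == 1].mean() - y[t == 0].mean())
    X = np.column_stack([np.ones(n), t, x])
    adj.append(np.linalg.lstsq(X, y, rcond=None)[0][1])   # coefficient on t
print(f"Mean bias, unadjusted: {np.mean(unadj) - tau:+.4f}")
print(f"Mean bias, covariate-adjusted: {np.mean(adj) - tau:+.4f}")
```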
Gagnon-Bartsch, J. A.; Sales, A. C.; Wu, E.; Botelho, A. F.; Erickson, J. A.; Miratrix, L. W.; Heffernan, N. T. – Grantee Submission, 2019
Randomized controlled trials (RCTs) admit unconfounded design-based inference--randomization largely justifies the assumptions underlying statistical effect estimates--but often have limited sample sizes. However, researchers may have access to big observational data on covariates and outcomes from RCT non-participants. For example, data from A/B…
Descriptors: Randomized Controlled Trials, Educational Research, Prediction, Algorithms
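One way predictions from auxiliary ("remnant") data can be folded into a design-based analysis is to residualize the RCT outcomes against a model fit outside the trial; the sketch below is a simplified illustration under assumed data, not the authors' exact estimator:

```python
import numpy as np

# Residualizing RCT outcomes against predictions from a model fit on outside
# ("remnant") data: because y_hat uses only pre-treatment information, the
# difference of residual means between arms remains unconfounded, often with
# smaller variance. Data and the stand-in prediction rule are assumptions.
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)                        # pre-treatment covariate
t = rng.integers(0, 2, n)                     # random assignment within the RCT
y = 0.2 * t + 1.5 * x + rng.normal(size=n)    # assumed data-generating process

y_hat = 1.5 * x                               # stand-in for a remnant-trained model
naive = y[t == 1].mean() - y[t == 0].mean()
adjusted = (y - y_hat)[t == 1].mean() - (y - y_hat)[t == 0].mean()
print(f"Difference in means: {naive:.3f}   Residualized estimate: {adjusted:.3f}")
```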
Peer reviewed | Direct link
Weiss, Michael J.; Lockwood, J. R.; McCaffrey, Daniel F. – Journal of Research on Educational Effectiveness, 2016
In the "individually randomized group treatment" (IRGT) experimental design, individuals are first randomly assigned to a treatment arm or a control arm, but then within each arm, are grouped together (e.g., within classrooms/schools, through shared case managers, in group therapy sessions, through shared doctors, etc.) to receive…
Descriptors: Randomized Controlled Trials, Error of Measurement, Control Groups, Experimental Groups
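The core issue in IRGT designs is clustering induced after randomization. A quick arithmetic sketch of the resulting standard-error inflation, with assumed group size and intraclass correlation:

```python
import numpy as np

# Standard-error inflation from post-randomization grouping in an IRGT design:
# the grouped arm's variance contribution is inflated by roughly the design
# effect 1 + (m - 1) * rho. Group size and ICC below are assumed numbers.
n_per_arm, m, rho = 200, 10, 0.05
var_control = 1 / n_per_arm                           # standardized outcome, no groups
var_treated = (1 + (m - 1) * rho) / n_per_arm         # arm grouped after randomization
se_naive = np.sqrt(2 / n_per_arm)                     # ignores induced clustering
se_aware = np.sqrt(var_control + var_treated)
print(f"Naive SE: {se_naive:.4f}   Clustering-aware SE: {se_aware:.4f}")
```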
Peer reviewed | Full text available on ERIC (PDF)
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
Peer reviewed | Full text available on ERIC (PDF)
Tipton, Elizabeth; Pustejovsky, James E. – Society for Research on Educational Effectiveness, 2015
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Descriptors: Randomized Controlled Trials, Sample Size, Effect Size, Hypothesis Testing
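For context on what a "multiple contrast hypothesis test" looks like, here is a naive large-sample Wald-type test that three levels of a categorical moderator share the same mean effect size; the data are invented, and the paper's contribution is precisely the small-sample correction that this uncorrected version lacks:

```python
import numpy as np
from scipy.stats import chi2

# Naive large-sample Wald test that three levels of a categorical moderator
# share the same mean effect size (made-up effect sizes; no small-sample
# correction applied, which is exactly what the paper addresses).
effects = {"elementary": [0.12, 0.30, 0.18],
           "middle": [0.05, 0.22],
           "high": [0.40, 0.28, 0.35, 0.31]}
means = np.array([np.mean(v) for v in effects.values()])
variances = np.array([np.var(v, ddof=1) / len(v) for v in effects.values()])

C = np.array([[1.0, -1.0, 0.0],            # contrasts: level 1 vs 2, level 1 vs 3
              [1.0, 0.0, -1.0]])
d = C @ means
V = C @ np.diag(variances) @ C.T
Q = float(d @ np.linalg.solve(V, d))
q = C.shape[0]
print(f"Wald Q = {Q:.2f} on {q} df, naive chi-square p = {chi2.sf(Q, q):.3f}")
```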