Showing all 6 results
Peer reviewed
Direct link
Peter Schochet – Society for Research on Educational Effectiveness, 2021
Background: When RCTs are not feasible and time series data are available, panel data methods can be used to estimate treatment effects on outcomes by exploiting variation in policies and conditions over time and across locations. A complication with these methods, however, is that treatment timing often varies across the sample, for example, due…
Descriptors: Statistical Analysis, Computation, Randomized Controlled Trials, COVID-19
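The panel-data approach the abstract describes can be sketched with a toy two-way fixed effects regression on simulated data. Everything here (unit/period counts, adoption periods, the effect size of 2.0) is an illustrative assumption, not taken from the paper; note that units adopt treatment at different times, the staggered-timing complication the abstract raises:

```python
import numpy as np

# Hypothetical sketch: two-way fixed effects on a tiny simulated panel
# with staggered treatment adoption. All numbers are made up.
rng = np.random.default_rng(0)
n_units, n_periods, true_effect = 6, 8, 2.0

unit = np.repeat(np.arange(n_units), n_periods)
time = np.tile(np.arange(n_periods), n_units)
adopt = np.array([2, 3, 4, 5, 99, 99])        # 99 = never treated
treated = (time >= adopt[unit]).astype(float)  # staggered treatment timing

y = (0.5 * unit + 0.3 * time + true_effect * treated
     + rng.normal(0, 0.1, size=unit.size))

# Design matrix: intercept, unit dummies, period dummies, treatment indicator.
X = np.column_stack([
    np.ones(unit.size),
    *((unit == u) for u in range(1, n_units)),
    *((time == t) for t in range(1, n_periods)),
    treated,
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(beta[-1], 2))  # estimated treatment effect, close to 2.0
```

With a homogeneous effect this recovers the truth; the literature the abstract belongs to is largely about what goes wrong when effects are heterogeneous across adoption cohorts.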
Peer reviewed
Direct link
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
Peer reviewed
PDF on ERIC: Download full text
What Works Clearinghouse, 2020
The What Works Clearinghouse (WWC) is an initiative of the U.S. Department of Education's Institute of Education Sciences (IES), which was established under the Education Sciences Reform Act of 2002. It is an important part of IES's strategy to use rigorous and relevant research, evaluation, and statistics to improve the nation's education system.…
Descriptors: Educational Research, Evaluation Methods, Evidence, Statistical Significance
Peer reviewed
PDF on ERIC: Download full text
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
Peer reviewed
PDF on ERIC: Download full text
Tipton, Elizabeth; Pustejovsky, James E. – Society for Research on Educational Effectiveness, 2015
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Descriptors: Randomized Controlled Trials, Sample Size, Effect Size, Hypothesis Testing
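The problem motivating such small-sample corrections can be illustrated with a generic numpy Monte Carlo (this is not the authors' correction, just a sketch of the failure it addresses): with only a handful of studies, comparing a Wald-type multiple-contrast statistic to its asymptotic chi-square cutoff rejects a true null far more often than the nominal level.

```python
import numpy as np

# Generic illustration: Wald test of q = 2 contrasts based on m = 8
# study-level estimates, judged against the asymptotic chi-square(2)
# 95th-percentile cutoff. The null is true in every replication.
rng = np.random.default_rng(1)
m, q, reps = 8, 2, 20_000
chi2_cut = 5.991  # chi-square(2) 0.95 quantile

rejections = 0
for _ in range(reps):
    z = rng.standard_normal((m, q))          # study-level estimates under the null
    zbar = z.mean(axis=0)
    S = np.cov(z, rowvar=False) / m          # estimated covariance of the mean
    wald = zbar @ np.linalg.solve(S, zbar)   # Hotelling-type Wald statistic
    rejections += wald > chi2_cut

rate = rejections / reps
print(round(rate, 3))  # roughly 0.15, well above the nominal 0.05
```

An F reference distribution with small-sample degrees of freedom (as in the corrections the abstract describes) uses a larger cutoff and restores the nominal rejection rate.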
Peer reviewed
PDF on ERIC: Download full text
Tipton, Elizabeth; Hallberg, Kelly; Hedges, Larry V.; Chan, Wendy – Society for Research on Educational Effectiveness, 2015
Policy-makers are frequently interested in understanding how effective a particular intervention may be for a specific (and often broad) population. In many fields, particularly education and social welfare, the ideal form of these evaluations is a large-scale randomized experiment. Recent research has highlighted that sites in these large-scale…
Descriptors: Generalization, Program Effectiveness, Sample Size, Computation