Showing all 12 results
Peer reviewed
Sims, Sam; Anders, Jake; Inglis, Matthew; Lortie-Forgues, Hugues – Journal of Research on Educational Effectiveness, 2023
Randomized controlled trials have proliferated in education, in part because they provide an unbiased estimator for the causal impact of interventions. It is increasingly recognized that many such trials in education have low power to detect an effect if indeed there is one. However, it is less well known that low-powered trials tend to…
Descriptors: Randomized Controlled Trials, Educational Research, Effect Size, Intervention
Peer reviewed
Simpson, Adrian – Journal of Research on Educational Effectiveness, 2023
Evidence-based education aims to support policy makers choosing between potential interventions. This rarely involves considering each in isolation; instead, sets of evidence regarding many potential policy interventions are considered. Filtering a set on any quantity measured with error risks the "winner's curse": conditional on…
Descriptors: Effect Size, Educational Research, Evidence Based Practice, Foreign Countries
Peer reviewed
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
Peer reviewed
Kelcey, Ben; Spybrook, Jessaca; Dong, Nianbo; Bai, Fangxing – Journal of Research on Educational Effectiveness, 2020
Professional development for teachers is regarded as one of the principal pathways through which we can understand and cultivate effective teaching and improve student outcomes. A critical component of studies that seek to improve teaching through professional development is the detailed assessment of the intermediate teacher development processes…
Descriptors: Faculty Development, Educational Research, Randomized Controlled Trials, Research Design
Peer reviewed
Watts, Tyler W.; Bailey, Drew H.; Li, Chen – Journal of Research on Educational Effectiveness, 2019
Theories regarding the long-term effects of educational interventions are often assumed, but rarely tested using experimental methods. In the following commentary, we argue that the shortage of randomized controlled trials with long-term follow-up presents serious problems for the field, as it hampers our ability to develop educational programs that…
Descriptors: Educational Research, Longitudinal Studies, Educational Assessment, Educational Quality
Peer reviewed
Hedges, Larry V. – Journal of Research on Educational Effectiveness, 2018
The scientific rigor of education research has improved dramatically since the year 2000. Much of the credit for this improvement is deserved by Institute of Education Sciences (IES) policies that helped create a demand for rigorous research; increased human capital capacity to carry out such work; provided funding for the work itself; and…
Descriptors: Educational Research, Generalization, Intervention, Human Capital
Peer reviewed
Hallberg, Kelly; Williams, Ryan; Swanlund, Andrew – Journal of Research on Educational Effectiveness, 2020
More aggregate data on school performance are available than ever before, opening up new possibilities for applied researchers interested in assessing the effectiveness of school-level interventions quickly and at a relatively low cost by implementing comparative interrupted time series (CITS) designs. We examine the extent to which effect…
Descriptors: Data Use, Research Methodology, Program Effectiveness, Design
Peer reviewed
Stallasch, Sophie E.; Lüdtke, Oliver; Artelt, Cordula; Brunner, Martin – Journal of Research on Educational Effectiveness, 2021
To plan cluster-randomized trials with sufficient statistical power to detect intervention effects on student achievement, researchers need multilevel design parameters, including measures of between-classroom and between-school differences and the amounts of variance explained by covariates at the student, classroom, and school level. Previous…
Descriptors: Foreign Countries, Randomized Controlled Trials, Intervention, Educational Research
Peer reviewed
Brunner, Martin; Keller, Ulrich; Wenger, Marina; Fischbach, Antoine; Lüdtke, Oliver – Journal of Research on Educational Effectiveness, 2018
To plan group-randomized trials where treatment conditions are assigned to schools, researchers need design parameters that provide information about between-school differences in outcomes as well as the amount of variance that can be explained by covariates at the student (L1) and school (L2) levels. Most previous research has offered these…
Descriptors: Academic Achievement, Student Motivation, Psychological Patterns, Learning Strategies
Peer reviewed
Chan, Wendy – Journal of Research on Educational Effectiveness, 2017
Recent methods to improve generalizations from nonrandom samples typically invoke assumptions such as the strong ignorability of sample selection, which is challenging to meet in practice. Although researchers acknowledge the difficulty in meeting this assumption, point estimates are still provided and used without considering alternative…
Descriptors: Generalization, Inferences, Probability, Educational Research
Peer reviewed
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon – Journal of Research on Educational Effectiveness, 2016
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated…
Descriptors: Educational Research, Research Design, Intervention, Statistical Analysis
Peer reviewed
Rhoads, Christopher – Journal of Research on Educational Effectiveness, 2016
Experimental evaluations that involve the educational system usually involve a hierarchical structure (students are nested within classrooms that are nested within schools, etc.). Concerns about contamination, where research subjects receive certain features of an intervention intended for subjects in a different experimental group, have often led…
Descriptors: Educational Experiments, Error of Measurement, Research Design, Statistical Analysis