Showing all 15 results
Peer reviewed
Direct link
Peter Schochet – Society for Research on Educational Effectiveness, 2024
Random encouragement designs are randomized controlled trials (RCTs) that test interventions aimed at increasing participation in a program or activity whose take-up is not universal. In these RCTs, instead of randomizing individuals or clusters directly into treatment and control groups to participate in a program or activity, the randomization…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
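The logic behind encouragement designs is worth a concrete sketch: the randomized offer serves as an instrument for actual take-up, and the ratio of the intent-to-treat effect to the take-up difference recovers the effect for compliers. Below is a minimal simulated illustration (all numbers and variable names hypothetical), not the estimator developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated encouragement design: z = randomized encouragement offer,
# d = actual program take-up (not universal), y = outcome.
n = 10_000
z = rng.integers(0, 2, n)                        # randomized encouragement
d = (rng.random(n) < 0.2 + 0.4 * z).astype(int)  # take-up rises with encouragement
y = 1.0 + 0.5 * d + rng.normal(0, 1, n)          # true effect of take-up = 0.5

# Intent-to-treat effect of the encouragement itself
itt = y[z == 1].mean() - y[z == 0].mean()

# First stage: effect of encouragement on take-up
first_stage = d[z == 1].mean() - d[z == 0].mean()

# Wald / IV estimate of the participation effect for compliers (LATE)
late = itt / first_stage
print(f"ITT={itt:.3f}, first stage={first_stage:.3f}, LATE={late:.3f}")
```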
Peer reviewed
Direct link
Rrita Zejnullahi – Society for Research on Educational Effectiveness, 2021
Background: Meta-analysis is considered to be the gold standard for evidence synthesis. It involves combining data from multiple independent sources to produce a summary estimate with improved precision. Traditionally, meta-analysis methods have been applied to a large collection of studies, and past research efforts have indicated its numerous…
Descriptors: Meta Analysis, Randomized Controlled Trials, Sample Size, Best Practices
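For readers unfamiliar with the mechanics, a minimal random-effects meta-analysis (DerSimonian-Laird) looks like the sketch below; the effect sizes and variances are hypothetical, and with very few studies these estimates become unstable, which is precisely the setting the abstract addresses:

```python
import numpy as np

# Hypothetical study-level effect estimates and their sampling variances
yi = np.array([0.12, 0.25, 0.08, 0.30, 0.18])
vi = np.array([0.010, 0.020, 0.015, 0.040, 0.012])

# Fixed-effect (inverse-variance) pooled estimate
w_fe = 1.0 / vi
mu_fe = np.sum(w_fe * yi) / np.sum(w_fe)

# DerSimonian-Laird estimate of between-study variance tau^2
q = np.sum(w_fe * (yi - mu_fe) ** 2)
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (q - (len(yi) - 1)) / c)

# Random-effects pooled estimate and its standard error
w_re = 1.0 / (vi + tau2)
mu_re = np.sum(w_re * yi) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"tau^2={tau2:.4f}, pooled={mu_re:.3f} (SE {se_re:.3f})")
```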
Peer reviewed
Direct link
Anthony Gambino – Society for Research on Educational Effectiveness, 2021
Analysis of symmetrically predicted endogenous subgroups (ASPES) is an approach to assessing heterogeneity in an intent-to-treat (ITT) effect from a randomized experiment when an intermediate variable (one that is measured after random assignment and before outcomes) is hypothesized to be related to the ITT effect, but is only measured in one group. For example,…
Descriptors: Randomized Controlled Trials, Prediction, Program Evaluation, Credibility
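A rough sketch of the symmetric-prediction idea follows, under simplifying assumptions: the intermediate variable is simulated for everyone but modeled only in the group where it would be observed, and the real ASPES procedure involves additional steps (e.g., cross-validated prediction and reweighting) omitted here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
t = rng.integers(0, 2, n)            # random assignment
x = rng.normal(size=(n, 3))          # baseline covariates

# Intermediate variable (e.g., engagement); in practice observed only when t == 1
m = (x[:, 0] + rng.normal(0, 1, n) > 0).astype(int)
y = 0.2 * t + 0.3 * t * m + x[:, 0] + rng.normal(0, 1, n)

# Step 1: model the intermediate variable from baseline covariates,
# fit only in the group where it is observed (here, the treatment group)
model = LogisticRegression().fit(x[t == 1], m[t == 1])

# Step 2: apply the same prediction rule symmetrically to both groups
pred = model.predict(x)

# Step 3: ITT effects within predicted subgroups
for g in (0, 1):
    sub = pred == g
    itt = y[sub & (t == 1)].mean() - y[sub & (t == 0)].mean()
    print(f"predicted subgroup {g}: ITT = {itt:.3f}")
```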
Peer reviewed
Direct link
Winnie Wing-Yee Tse; Hok Chio Lai – Society for Research on Educational Effectiveness, 2021
Background: Power analysis and sample size planning are key components in designing cluster randomized trials (CRTs), a common study design to test treatment effect by randomizing clusters or groups of individuals. Sample size determination in two-level CRTs requires knowledge of more than one design parameter, such as the effect size and the…
Descriptors: Sample Size, Bayesian Statistics, Randomized Controlled Trials, Research Design
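The standard (non-Bayesian) MDES calculation for a two-level CRT looks like the sketch below; the formula is the conventional one (Bloom-style), not necessarily the authors' exact formulation. The abstract's point is that design parameters such as the intraclass correlation (ICC) are themselves uncertain, so the last lines illustrate one simple way to propagate that uncertainty by averaging over a prior (all inputs hypothetical):

```python
import numpy as np
from scipy import stats

def mdes_crt2(j_clusters, n_per_cluster, icc, p_treat=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect size for a two-level cluster RCT
    (cluster-level randomization, no covariates)."""
    df = j_clusters - 2
    m = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
    var = (icc / (p_treat * (1 - p_treat) * j_clusters)
           + (1 - icc) / (p_treat * (1 - p_treat) * j_clusters * n_per_cluster))
    return m * np.sqrt(var)

# e.g. 40 schools of 25 students each, ICC = 0.15
print(f"point-estimate MDES = {mdes_crt2(40, 25, 0.15):.3f}")

# Reflecting uncertainty in the ICC: average the MDES over a prior
# rather than plugging in a single guessed value (prior is hypothetical)
icc_draws = stats.beta.rvs(2, 18, size=2000, random_state=0)  # prior mean ~ 0.10
print(f"prior-averaged MDES = {np.mean([mdes_crt2(40, 25, r) for r in icc_draws]):.3f}")
```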
Peer reviewed
Direct link
Peter Schochet – Society for Research on Educational Effectiveness, 2021
Background: When RCTs are not feasible and time series data are available, panel data methods can be used to estimate treatment effects on outcomes, by exploiting variation in policies and conditions over time and across locations. A complication with these methods, however, is that treatment timing often varies across the sample, for example, due…
Descriptors: Statistical Analysis, Computation, Randomized Controlled Trials, COVID-19
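The complication the abstract raises can be made concrete. The conventional two-way fixed effects (TWFE) estimator, sketched below on simulated data with staggered adoption, recovers the effect here only because effects are homogeneous; when effects vary with treatment timing, the same estimator can be badly biased, which is what motivates newer panel estimators. All data are simulated:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Hypothetical unit-by-year panel with staggered adoption
units, years = 30, 10
df = pd.DataFrame([(u, t) for u in range(units) for t in range(years)],
                  columns=["unit", "year"])
adopt = rng.integers(3, 9, units)                  # adoption year varies by unit
df["treated"] = (df["year"] >= adopt[df["unit"]]).astype(int)
df["y"] = (df["unit"] * 0.1 + df["year"] * 0.2
           + 0.5 * df["treated"] + rng.normal(0, 1, len(df)))

# Two-way fixed effects via double demeaning (balanced panel)
for col in ["y", "treated"]:
    df[col + "_dm"] = (df[col]
                       - df.groupby("unit")[col].transform("mean")
                       - df.groupby("year")[col].transform("mean")
                       + df[col].mean())

beta = (df["y_dm"] * df["treated_dm"]).sum() / (df["treated_dm"] ** 2).sum()
print(f"TWFE estimate = {beta:.3f}")  # biased if effects vary with timing
```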
Peer reviewed
PDF on ERIC
Deke, John; Wei, Thomas; Kautz, Tim – Society for Research on Educational Effectiveness, 2018
Evaluators of education interventions increasingly need to design studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." For example, an evaluation of Response to Intervention from the Institute of Education Sciences (IES) detected impacts ranging from 0.13 to 0.17 standard…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
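Why small target effects are so demanding follows from simple arithmetic: required sample size scales with the inverse square of the effect size, so halving the target effect roughly quadruples the sample. A quick illustration for an individually randomized two-arm trial (normal approximation; inputs hypothetical):

```python
from scipy import stats

def n_per_arm(delta, alpha=0.05, power=0.80):
    """Approximate n per arm for a two-arm individually randomized trial
    to detect a standardized mean difference delta."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return 2 * (z / delta) ** 2

for d in (0.20, 0.17, 0.13, 0.10):
    print(f"delta={d:.2f}: ~{n_per_arm(d):,.0f} per arm")
```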
Peer reviewed
PDF on ERIC
Spybrook, Jessaca; Kelcey, Ben; Dong, Nianbo – Society for Research on Educational Effectiveness, 2016
Cluster randomized trials (CRTs), or studies in which intact groups of individuals are randomly assigned to a condition, are becoming more common in evaluation studies of educational programs. A specific type of CRT in which clusters are randomly assigned to treatment within blocks or sites, known as multisite cluster randomized trials (MSCRTs),…
Descriptors: Statistical Analysis, Computation, Randomized Controlled Trials, Cluster Grouping
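The design itself is easy to sketch: within each site (block), intact clusters such as schools are randomized to condition. A minimal illustration with hypothetical site and school names:

```python
import numpy as np

rng = np.random.default_rng(3)

# Multisite cluster randomized trial: within each site (block),
# half the clusters (schools) are assigned to treatment.
sites = {f"site_{s}": [f"school_{s}_{j}" for j in range(6)] for s in range(4)}

assignment = {}
for site, schools in sites.items():
    schools = list(schools)
    rng.shuffle(schools)
    half = len(schools) // 2
    for school in schools[:half]:
        assignment[school] = "treatment"
    for school in schools[half:]:
        assignment[school] = "control"

print(assignment)
```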
Peer reviewed
PDF on ERIC
Society for Research on Educational Effectiveness, 2017
Bayesian statistical methods have become more feasible to implement with advances in computing but are not commonly used in educational research. In contrast to frequentist approaches that take hypotheses (and the associated parameters) as fixed, Bayesian methods take data as fixed and hypotheses as random. This difference means that Bayesian…
Descriptors: Bayesian Statistics, Educational Research, Statistical Analysis, Decision Making
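The flavor of inference the abstract describes can be shown with a conjugate normal-normal update, which yields direct probability statements about the effect itself, something a frequentist p-value cannot provide. All numbers are hypothetical:

```python
from scipy import stats

# Hypothetical: normal prior on a standardized effect, normal likelihood
prior_mean, prior_sd = 0.0, 0.10        # skeptical prior
est, se = 0.12, 0.06                    # study estimate and standard error

# Conjugate normal-normal posterior (precision-weighted average)
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
post_mean = post_var * (prior_mean / prior_sd**2 + est / se**2)

# A direct probability statement about the hypothesis
p_positive = 1 - stats.norm.cdf(0, loc=post_mean, scale=post_var**0.5)
print(f"posterior mean={post_mean:.3f}, P(effect > 0)={p_positive:.2%}")
```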
Peer reviewed
PDF on ERIC
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized controlled trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods
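The WSC logic is simple to simulate: run an experimental benchmark and an observational estimator on comparable data, then ask whether the observational approach recovers the benchmark. A minimal sketch with simulated data (not the authors' design):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)                         # confounder

# Experimental arm: randomized treatment gives an unbiased benchmark
t_rct = rng.integers(0, 2, n)
y_rct = 0.3 * t_rct + 0.8 * x + rng.normal(0, 1, n)
benchmark = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()

# Observational arm: selection into treatment depends on x
t_obs = (x + rng.normal(0, 1, n) > 0).astype(int)
y_obs = 0.3 * t_obs + 0.8 * x + rng.normal(0, 1, n)
naive = y_obs[t_obs == 1].mean() - y_obs[t_obs == 0].mean()

# Covariate-adjusted observational estimate via least squares on (1, t, x)
X = np.column_stack([np.ones(n), t_obs, x])
beta = np.linalg.lstsq(X, y_obs, rcond=None)[0]

print(f"RCT benchmark={benchmark:.3f}, naive obs={naive:.3f}, "
      f"adjusted obs={beta[1]:.3f}")
```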
Peer reviewed
PDF on ERIC
Dong, Nianbo – Society for Research on Educational Effectiveness, 2014
For intervention studies involving binary treatment variables, procedures for power analysis have been worked out and computerized estimation tools are generally available. The purpose of this study is to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval,…
Descriptors: Cluster Grouping, Randomized Controlled Trials, Statistical Analysis, Computation
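As a companion to the MDES sketch above, the power calculation runs in the other direction: fix an effect size and design, and compute the probability of detection. A sketch using the noncentral t distribution (one-tail approximation; all parameters hypothetical, not the formulations developed in the paper):

```python
import numpy as np
from scipy import stats

def power_crt2(delta, j, n, icc, p=0.5, alpha=0.05):
    """Approximate power of a two-level cluster RCT for effect size delta."""
    se = np.sqrt(icc / (p * (1 - p) * j)
                 + (1 - icc) / (p * (1 - p) * j * n))
    ncp = delta / se                        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, j - 2)
    return 1 - stats.nct.cdf(t_crit, j - 2, ncp)

# e.g. delta = 0.25 with 40 clusters of 25, ICC = 0.15
print(f"power = {power_crt2(0.25, 40, 25, 0.15):.2f}")
```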
Peer reviewed
PDF on ERIC
Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2014
Cluster randomized trials (CRTs), or studies in which intact groups of individuals are randomly assigned to a condition, are becoming more common in the evaluation of educational programs, policies, and practices. The website for the National Center for Education Evaluation and Regional Assistance (NCEE) reveals that it has launched over 30…
Descriptors: Cluster Grouping, Randomized Controlled Trials, Statistical Analysis, Computation
Peer reviewed
PDF on ERIC
Tipton, Elizabeth; Pustejovsky, James E. – Society for Research on Educational Effectiveness, 2015
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Descriptors: Randomized Controlled Trials, Sample Size, Effect Size, Hypothesis Testing
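The small-sample problem can be illustrated with a Hotelling T²-type rescaling of the sort this literature builds on: the same Wald statistic gives noticeably different p-values when referred to a chi-square versus a rescaled F with small denominator degrees of freedom. The numbers below, including the degrees of freedom, are hypothetical; estimating those degrees of freedom well is the substance of the actual corrections:

```python
import numpy as np
from scipy import stats

# Hypothetical omnibus test of q = 2 contrasts in a meta-regression
b = np.array([0.15, 0.22])                 # contrast estimates
V = np.array([[0.010, 0.002],              # their (robust) covariance matrix
              [0.002, 0.015]])
q = len(b)
Q = float(b @ np.linalg.solve(V, b))       # Wald statistic

# Naive large-sample test: Q ~ chi-square(q); rejects too often
# when the number of studies is small.
p_naive = 1 - stats.chi2.cdf(Q, q)

# Hotelling T^2-style small-sample correction: rescale Q and refer it
# to an F distribution with denominator df (df = 8 is hypothetical).
df = 8
F = (df - q + 1) / (df * q) * Q
p_corrected = 1 - stats.f.cdf(F, q, df - q + 1)

print(f"Q={Q:.2f}, naive p={p_naive:.4f}, corrected p={p_corrected:.4f}")
```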
Peer reviewed
PDF on ERIC
Tipton, Elizabeth; Hallberg, Kelly; Hedges, Larry V.; Chan, Wendy – Society for Research on Educational Effectiveness, 2015
Policy-makers are frequently interested in understanding how effective a particular intervention may be for a specific (and often broad) population. In many fields, particularly education and social welfare, the ideal form of these evaluations is a large-scale randomized experiment. Recent research has highlighted that sites in these large-scale…
Descriptors: Generalization, Program Effectiveness, Sample Size, Computation
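A basic diagnostic from this literature compares recruited sites to the inference population on observed covariates, for instance via standardized mean differences; large gaps signal limited generalizability. A sketch on simulated data (variable names hypothetical):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Hypothetical covariates for a population of districts and a recruited sample
pop = pd.DataFrame({"pct_frl": rng.normal(50, 15, 2000),
                    "enrollment": rng.normal(500, 150, 2000)})
sample = pop.sample(40, random_state=0).copy()
sample["pct_frl"] += 8   # recruited sites skew higher-poverty (hypothetical)

# Standardized mean differences: sample vs. population
for col in pop.columns:
    smd = (sample[col].mean() - pop[col].mean()) / pop[col].std()
    print(f"{col}: SMD = {smd:.2f}")
```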
Peer reviewed
PDF on ERIC
May, Henry; Sirinides, Philip; Gray, Abby; Davila, Heather Goldsworthy; Sam, Cecile; Blalock, Toscha; Blackman, Horatio; Anderson-Clark, Helen; Schiera, Andrew J. – Society for Research on Educational Effectiveness, 2015
As part of the 2010 economic stimulus, a $55 million "Investing in Innovation" (i3) grant from the US Department of Education was awarded to scale up Reading Recovery across the nation. This paper presents the final round of results from the large-scale, mixed methods randomized evaluation of the implementation and impacts of Reading…
Descriptors: Reading Programs, Program Evaluation, Reading Achievement, Mixed Methods Research
Peer reviewed
PDF on ERIC
Cheung, Alan; Slavin, Robert – Society for Research on Educational Effectiveness, 2016
As evidence-based reform becomes increasingly important in educational policy, it is becoming essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. The purpose of this study was to examine how methodological features such as types of publication, sample sizes, and…
Descriptors: Effect Size, Evidence Based Practice, Educational Change, Educational Policy