| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 19 |
| Source | Records |
| --- | --- |
| Society for Research on Educational Effectiveness | 19 |
| Author | Records |
| --- | --- |
| Dong, Nianbo | 4 |
| Kelcey, Ben | 4 |
| Spybrook, Jessaca | 3 |
| Al Otaiba, Stephanie | 1 |
| Anderson, Kaitlin P. | 1 |
| Andrew Ho | 1 |
| Bayer, Amanda | 1 |
| Bellinger, Jill | 1 |
| Ben Domingue | 1 |
| Bradshaw, Catherine P. | 1 |
| Cheng, Weiyi | 1 |
| Publication Type | Records |
| --- | --- |
| Reports - Research | 19 |
| Numerical/Quantitative Data | 1 |
| Education Level | Records |
| --- | --- |
| Elementary Education | 5 |
| Secondary Education | 3 |
| Early Childhood Education | 2 |
| Elementary Secondary Education | 2 |
| Junior High Schools | 2 |
| Middle Schools | 2 |
| Grade 3 | 1 |
| Grade 4 | 1 |
| Grade 5 | 1 |
| Grade 6 | 1 |
| Grade 7 | 1 |
| Audience | Records |
| --- | --- |
| Researchers | 2 |
| Policymakers | 1 |
| Location | Records |
| --- | --- |
| North Carolina | 2 |
| Arizona | 1 |
| Arkansas | 1 |
| Colombia | 1 |
| Colorado | 1 |
| District of Columbia | 1 |
| Illinois | 1 |
| India | 1 |
| Kentucky | 1 |
| Kenya | 1 |
| Louisiana | 1 |
| Assessments and Surveys | Records |
| --- | --- |
| Dynamic Indicators of Basic… | 1 |
| Woodcock Johnson Tests of… | 1 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
Ishita Ahmed; Masha Bertling; Lijin Zhang; Andrew Ho; Prashant Loyalka; Scott Rozelle; Ben Domingue – Society for Research on Educational Effectiveness, 2023
Background: Evidence from education randomized controlled trials (RCTs) in low- and middle-income countries (LMICs) demonstrates how interventions can improve children's educational achievement [1, 2, 3, 4]. RCTs assess the impact of an intervention by comparing outcomes (aggregate test scores) between treatment and control groups. A review of…
Descriptors: Randomized Controlled Trials, Educational Research, Outcome Measures, Research Design
Justin Boutilier; Jonas Jonasson; Hannah Li; Erez Yoeli – Society for Research on Educational Effectiveness, 2024
Background: Randomized controlled trials (RCTs), or experiments, are the gold standard for intervention evaluation. However, the main appeal of RCTs (the clean identification of causal effects) can be compromised by interference, when one subject's actions can influence another subject's behavior or outcomes. In this paper, we formalize and study…
Descriptors: Randomized Controlled Trials, Intervention, Mathematical Models, Interference (Learning)
Winnie Wing-Yee Tse; Hok Chio Lai – Society for Research on Educational Effectiveness, 2021
Background: Power analysis and sample size planning are key components in designing cluster randomized trials (CRTs), a common study design that tests treatment effects by randomizing clusters or groups of individuals. Sample size determination in two-level CRTs requires knowledge of more than one design parameter, such as the effect size and the…
Descriptors: Sample Size, Bayesian Statistics, Randomized Controlled Trials, Research Design
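As context for the design parameters this abstract points to, here is a minimal sketch of the conventional frequentist power calculation for a two-level CRT with cluster-level assignment and no covariates. The intraclass correlation (ICC), effect size, number of clusters, and cluster size are the assumed inputs; the paper's own approach (the descriptors suggest a Bayesian treatment of uncertain design parameters) is not reproduced here.

```python
# Illustrative only: conventional frequentist power for a two-level cluster
# randomized trial (clusters assigned to treatment, no covariates).
# Standard textbook formulation, not the method proposed in the paper above.
from scipy import stats
import numpy as np

def crt_power(delta, rho, J, n, P=0.5, alpha=0.05):
    """Power to detect a standardized effect `delta` in a two-level CRT.

    delta : standardized treatment effect (Cohen's d scale)
    rho   : intraclass correlation (ICC)
    J     : total number of clusters
    n     : individuals per cluster
    P     : proportion of clusters assigned to treatment
    """
    # Variance of the estimated standardized effect (no covariates)
    var = rho / (P * (1 - P) * J) + (1 - rho) / (P * (1 - P) * J * n)
    ncp = delta / np.sqrt(var)          # noncentrality parameter
    df = J - 2                          # clusters minus two regression df
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Two-sided power from the noncentral t distribution
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

# Example with hypothetical inputs: 40 clusters of 25, ICC = 0.20, effect 0.25
print(round(crt_power(delta=0.25, rho=0.20, J=40, n=25), 3))
```

Uncertainty in the ICC and the effect size feeds directly into `var`, which is why sample size plans are sensitive to how those two parameters are chosen.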
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2017
The purpose of this paper is to present results of recent advances in power analyses to detect moderator effects in cluster randomized trials (CRTs). The paper focuses on a demonstration of the software PowerUp!-Moderator and provides a resource for researchers seeking to design CRTs with adequate power to detect the moderator effects of…
Descriptors: Computer Software, Research Design, Randomized Controlled Trials, Statistical Analysis
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2016
The purpose of this study is to propose a general framework for power analyses to detect the moderator effects in two- and three-level cluster randomized trials (CRTs). The study specifically aims to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval to…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Effect Size, Computation
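For orientation, the generic frequentist relation that MDES formulations of this kind build on can be written as below; the paper's contribution is the specific standard-error expressions for moderator effects in two- and three-level CRTs, which are not reproduced here.

```latex
% Generic minimum detectable effect size (MDES) relation; the design-specific
% standard errors for moderator effects are what the paper above derives.
\[
\mathrm{MDES} \;=\; M_{\nu}\,\mathrm{SE}\!\left(\hat{\delta}\right),
\qquad
M_{\nu} \;\approx\; t_{\alpha/2,\,\nu} + t_{1-\beta,\,\nu},
\]
where $\hat{\delta}$ is the estimated standardized (moderator) effect, $\nu$ the
degrees of freedom, $\alpha$ the two-sided significance level, and $1-\beta$ the
target power; the usual $100(1-\alpha)\%$ confidence interval for an estimated
effect is $\hat{\delta} \pm t_{\alpha/2,\,\nu}\,\mathrm{SE}(\hat{\delta})$.
```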
Rhoads, Christopher – Society for Research on Educational Effectiveness, 2016
Current practice for conducting power analyses in hierarchical trials using survey-based ICC and effect size estimates may misestimate power because ICCs are not being adjusted to account for treatment effect heterogeneity. Results presented in Table 1 show that the necessary adjustments can be quite large or quite small. Furthermore, power…
Descriptors: Statistical Analysis, Correlation, Effect Size, Surveys
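As a hedged illustration of why a misestimated ICC misestimates power (not the adjustment the paper proposes), the sketch below reuses the conventional two-level CRT formula to find the number of clusters needed for 80% power under two hypothetical ICC values.

```python
# Illustration of ICC sensitivity only; the heterogeneity adjustment described
# in the abstract above is not implemented here.
from scipy import stats
import numpy as np

def clusters_needed(delta, rho, n, P=0.5, alpha=0.05, target=0.80):
    """Smallest even number of clusters J giving `target` power in a
    conventional two-level CRT power calculation (no covariates)."""
    for J in range(4, 2001, 2):
        var = rho / (P * (1 - P) * J) + (1 - rho) / (P * (1 - P) * J * n)
        ncp = delta / np.sqrt(var)
        df = J - 2
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        power = stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)
        if power >= target:
            return J
    return None

# Hypothetical design: effect size 0.25, 25 students per cluster
for rho in (0.10, 0.20):
    print(rho, clusters_needed(delta=0.25, rho=rho, n=25))
```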
Shakeel, M. Danish; Anderson, Kaitlin P.; Wolf, Patrick J. – Society for Research on Educational Effectiveness, 2016
The objective of this meta-analysis is to rigorously assess the participant effects of private school vouchers, or in other words, to estimate the average academic impacts that the offer (or use) of a voucher has on a student. This review adds to the literature by being the first to systematically review all randomized controlled trials (RCTs) in an…
Descriptors: Educational Vouchers, Private Schools, Meta Analysis, Program Effectiveness
VanHoudnos, Nathan – Society for Research on Educational Effectiveness, 2016
Cluster randomized experiments are ubiquitous in modern education research. Although a variety of modeling approaches are used to analyze these data, perhaps the most common methodology is a normal mixed effects model where some effects, such as the treatment effect, are regarded as fixed, and others, such as the effect of group random assignment…
Descriptors: Effect Size, Randomized Controlled Trials, Educational Experiments, Educational Research
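The "normal mixed effects model" described here is, in its simplest two-level form, a random-intercept model with a fixed treatment effect. A minimal simulated example using statsmodels (an illustrative choice, not the author's code; all parameter values are hypothetical) looks like this:

```python
# Minimal sketch of the standard two-level mixed model for a CRT:
# fixed treatment effect, random cluster (group) intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
J, n = 40, 25                                   # clusters and students per cluster
cluster = np.repeat(np.arange(J), n)
treat = np.repeat(rng.permutation([0, 1] * (J // 2)), n)   # cluster-level assignment
u = rng.normal(0, 0.5, J)[cluster]              # random cluster effects
y = 0.25 * treat + u + rng.normal(0, 1.0, J * n)
df = pd.DataFrame({"y": y, "treat": treat, "cluster": cluster})

# y_ij = gamma_0 + gamma_1 * treat_j + u_j + e_ij,
# with u_j ~ N(0, tau^2) and e_ij ~ N(0, sigma^2)
fit = smf.mixedlm("y ~ treat", data=df, groups=df["cluster"]).fit()
print(fit.summary())
```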
Dong, Nianbo; Reinke, Wendy M.; Herman, Keith C.; Bradshaw, Catherine P.; Murray, Desiree W. – Society for Research on Educational Effectiveness, 2015
Cluster randomized experiments are now widely used to examine intervention effects in prevention science, and empirical benchmarks are useful for interpreting effect sizes in this context. The effect size (i.e., the standardized mean difference, calculated by the difference of the means between the treatment and control groups,…
Descriptors: Effect Size, Correlation, Multivariate Analysis, Statistical Analysis
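The effect size defined in this abstract is the usual standardized mean difference; in symbols, with the sample version standardized by the pooled standard deviation:

```latex
% Standardized mean difference as described in the abstract above.
\[
\delta \;=\; \frac{\mu_{T} - \mu_{C}}{\sigma},
\qquad
d \;=\; \frac{\bar{Y}_{T} - \bar{Y}_{C}}{S_{\text{pooled}}},
\qquad
S_{\text{pooled}} \;=\; \sqrt{\frac{(n_{T}-1)S_{T}^{2} + (n_{C}-1)S_{C}^{2}}{n_{T}+n_{C}-2}} .
\]
```

In cluster randomized designs, whether the standardizing term is the total, within-cluster, or between-cluster standard deviation changes the magnitude of the resulting effect size, which is one reason design-specific empirical benchmarks matter.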
Hedberg, E. C.; Hedges, Larry – Society for Research on Educational Effectiveness, 2017
The purpose of this paper is to showcase new research that seeks to provide guidance on the heterogeneity of treatment effects by utilizing the variance of demographic differences in state assessments. This study is focused on a simple randomized block design where students are nested within schools, and within each school students are randomized…
Descriptors: Databases, Randomized Controlled Trials, Educational Research, Research Design
Dong, Nianbo – Society for Research on Educational Effectiveness, 2014
For intervention studies involving binary treatment variables, procedures for power analysis have been worked out and computerized estimation tools are generally available. The purpose of this study is to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval,…
Descriptors: Cluster Grouping, Randomized Controlled Trials, Statistical Analysis, Computation
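A tiny computational counterpart of the MDES relation such tools implement is sketched below; `mdes` is a hypothetical helper, not the paper's software, and the design-specific standard error is supplied by the user from formulas like those the paper derives.

```python
# Hypothetical helper implementing the generic MDES relation
# MDES = (t_{alpha/2, df} + t_{1-beta, df}) * SE(effect estimate).
from scipy import stats

def mdes(se, df, alpha=0.05, power=0.80):
    """Minimum detectable effect size for a two-sided test."""
    multiplier = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
    return multiplier * se

# Example with an assumed standard error of 0.10 and 38 degrees of freedom
print(round(mdes(se=0.10, df=38), 3))
```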
Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2014
Cluster randomized trials (CRTs), or studies in which intact groups of individuals are randomly assigned to a condition, are becoming more common in the evaluation of educational programs, policies, and practices. The website for the National Center for Education Evaluation and Regional Assistance (NCEE) reveals they have launched over 30…
Descriptors: Cluster Grouping, Randomized Controlled Trials, Statistical Analysis, Computation
Bayer, Amanda; Grossman, Jean; DuBois, David – Society for Research on Educational Effectiveness, 2015
Prior research on mentoring relationships outside of school does point toward relationship closeness and related indicators of the emotional quality of the mentor-protégé tie as important influences on youth outcomes. There is preliminary evidence that this may also be the case for School Based Mentoring (SBM), or at least that closeness promotes…
Descriptors: Mentors, Volunteers, Academic Achievement, Outcomes of Education
DiPerna, James C.; Lei, Puiwa; Bellinger, Jill; Cheng, Weiyi – Society for Research on Educational Effectiveness, 2014
Teaching children to get along with others, care about themselves, and actively participate in learning are three of the most important outcomes of the schooling process. Yet children in some schools are not achieving these outcomes, and many educators have not received adequate training to create instructional environments that facilitate these…
Descriptors: Interpersonal Competence, Improvement Programs, Intervention, Program Effectiveness
Tipton, Elizabeth; Pustejovsky, James E. – Society for Research on Educational Effectiveness, 2015
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Descriptors: Randomized Controlled Trials, Sample Size, Effect Size, Hypothesis Testing
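For orientation, the multiple-contrast (Wald-type) test being corrected has the generic form below, written for the robust-variance-estimation setting these authors work in; the paper's contribution is a better small-sample reference distribution for Q, which is not reproduced here.

```latex
% Generic Wald-type multiple-contrast test in meta-regression with
% cluster-robust variance estimation; the paper develops small-sample
% corrections to the reference distribution used for Q.
\[
H_{0}\!:\; \mathbf{C}\boldsymbol{\beta} = \mathbf{0},
\qquad
Q \;=\; \left(\mathbf{C}\hat{\boldsymbol{\beta}}\right)^{\!\top}
\left(\mathbf{C}\,\mathbf{V}_{R}\,\mathbf{C}^{\top}\right)^{-1}
\left(\mathbf{C}\hat{\boldsymbol{\beta}}\right),
\]
where $\mathbf{C}$ is a $q \times p$ contrast matrix, $\hat{\boldsymbol{\beta}}$
the estimated meta-regression coefficients, and $\mathbf{V}_{R}$ their robust
variance estimate; comparing $Q/q$ to $F(q,\infty)$ (or $Q$ to $\chi^{2}_{q}$)
can be badly calibrated with few studies, which is the problem the
small-sample corrections address.
```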
