Publication Date
| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 7 |
| Since 2007 (last 20 years) | 10 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Error of Measurement | 10 |
| Randomized Controlled Trials | 10 |
| Research Design | 10 |
| Statistical Analysis | 7 |
| Educational Research | 4 |
| Effect Size | 4 |
| Hierarchical Linear Modeling | 4 |
| Intervention | 3 |
| Research Problems | 3 |
| Simulation | 3 |
| Statistical Inference | 3 |
Source
| Source | Records |
| --- | --- |
| Society for Research on Educational Effectiveness | 3 |
| Journal of Research on Educational Effectiveness | 2 |
| American Journal of Evaluation | 1 |
| Educational Research and Evaluation | 1 |
| Journal of Educational and Behavioral Statistics | 1 |
| Journal of Policy Analysis and Management | 1 |
| National Center for Education Evaluation and Regional Assistance | 1 |
Author
| Author | Records |
| --- | --- |
| Amota Ataneka | 1 |
| Artelt, Cordula | 1 |
| Ben Kelcey | 1 |
| Brunner, Martin | 1 |
| Cartwright, Nancy | 1 |
| Cook, Thomas D. | 1 |
| Daniel Almirall | 1 |
| Deke, John | 1 |
| Fangxing Bai | 1 |
| Greenhouse, Joel B. | 1 |
| Kautz, Tim | 1 |
Publication Type
| Publication Type | Records |
| --- | --- |
| Reports - Research | 7 |
| Journal Articles | 6 |
| Reports - Evaluative | 3 |
| Information Analyses | 1 |
| Numerical/Quantitative Data | 1 |
Education Level
| Education Level | Records |
| --- | --- |
| Elementary Education | 1 |
| Secondary Education | 1 |
Location
| Location | Records |
| --- | --- |
| Germany | 1 |
Ben Kelcey; Fangxing Bai; Amota Ataneka; Yanli Xie; Kyle Cox – Society for Research on Educational Effectiveness, 2024
We consider a class of multiple-group individually-randomized group trials (IRGTs) that introduces a (partially) cross-classified structure in the treatment condition (only). The novel feature of this design is that the nature of the treatment induces a clustering structure that involves two or more non-nested groups among individuals in the…
Descriptors: Randomized Controlled Trials, Research Design, Statistical Analysis, Error of Measurement
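The abstract is cut off, but the design it sketches, clustering that arises only under treatment through two or more non-nested groups, is commonly written as a partially nested, cross-classified mixed model. The formulation below is an illustrative reading of that structure, not the authors' specification; the grouping indices j(i) and k(i) and the variance components are assumptions.

```latex
% Illustrative outcome model for an individually-randomized group trial in
% which treated individuals (T_i = 1) are cross-classified in two non-nested
% groups j(i) and k(i); control individuals remain unclustered.
Y_i = \gamma_0 + \gamma_1 T_i + T_i\,(u_{j(i)} + v_{k(i)}) + e_i,
\qquad u_j \sim N(0,\sigma_u^2),\quad v_k \sim N(0,\sigma_v^2),\quad e_i \sim N(0,\sigma_e^2).
```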
Timothy Lycurgus; Daniel Almirall – Society for Research on Educational Effectiveness, 2024
Background: Education scientists are increasingly interested in constructing interventions that are adaptive over time to suit the evolving needs of students, classrooms, or schools. Such "adaptive interventions" (also referred to as dynamic treatment regimens or dynamic instructional regimes) determine which treatment should be offered…
Descriptors: Educational Research, Research Design, Randomized Controlled Trials, Intervention
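Operationally, an adaptive intervention is a decision rule that maps a student's observed status at a decision point to the next treatment. A minimal Python sketch of a two-stage rule follows; the tactic names and responder threshold are hypothetical, not taken from the paper.

```python
def next_treatment(early_response: float, threshold: float = 0.5) -> str:
    """Toy two-stage decision rule for an adaptive intervention.

    Students who respond adequately to the first-stage treatment continue
    with it; non-responders are switched to a more intensive tactic.
    Tactic names and the threshold are illustrative only.
    """
    if early_response >= threshold:
        return "continue first-stage tutoring"
    return "augment with small-group instruction"

print(next_treatment(0.7))  # -> continue first-stage tutoring
print(next_treatment(0.2))  # -> augment with small-group instruction
```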
Kristin Porter; Luke Miratrix; Kristen Hunter – Society for Research on Educational Effectiveness, 2021
Background: Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs)…
Descriptors: Statistical Analysis, Hypothesis Testing, Computer Software, Randomized Controlled Trials
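As one concrete illustration of the kind of procedure the abstract refers to (a widely used adjustment, not the authors' own software), Holm's step-down correction of a set of p-values can be applied with statsmodels:

```python
from statsmodels.stats.multitest import multipletests

# p-values from tests of one intervention on several outcomes or subgroups
# (illustrative numbers, not from the paper)
pvals = [0.003, 0.021, 0.047, 0.180, 0.410]

# Holm's step-down procedure controls the familywise error rate at alpha
reject, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(list(zip(pvals_adj.round(3), reject)))
```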
Cartwright, Nancy – Educational Research and Evaluation, 2019
Across the evidence-based policy and practice (EBPP) community, including education, randomised controlled trials (RCTs) rank as the most "rigorous" evidence for causal conclusions. This paper argues that this ranking is misleading. Only narrow conclusions about study populations can be warranted with the kind of "rigour" that RCTs…
Descriptors: Evidence Based Practice, Educational Policy, Randomized Controlled Trials, Error of Measurement
Stallasch, Sophie E.; Lüdtke, Oliver; Artelt, Cordula; Brunner, Martin – Journal of Research on Educational Effectiveness, 2021
To plan cluster-randomized trials with sufficient statistical power to detect intervention effects on student achievement, researchers need multilevel design parameters, including measures of between-classroom and between-school differences and the amounts of variance explained by covariates at the student, classroom, and school level. Previous…
Descriptors: Foreign Countries, Randomized Controlled Trials, Intervention, Educational Research
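Design parameters like these enter planning through minimum detectable effect size (MDES) formulas. Below is a common two-level approximation as a hedged sketch; the study itself covers additional levels, and every number in the example call is made up rather than drawn from it.

```python
import math

def mdes_two_level(J, n, icc, r2_cluster=0.0, r2_student=0.0, P=0.5, M=2.8):
    """Approximate MDES for a two-level cluster-randomized trial.

    J: number of clusters; n: students per cluster; icc: intraclass
    correlation; r2_*: variance explained by covariates at each level;
    P: proportion of clusters treated; M: multiplier (~2.8 for 80% power,
    alpha = .05 two-sided, ignoring the degrees-of-freedom refinement).
    """
    between = icc * (1 - r2_cluster) / (P * (1 - P) * J)
    within = (1 - icc) * (1 - r2_student) / (P * (1 - P) * J * n)
    return M * math.sqrt(between + within)

# e.g. 60 classrooms of 25 students, ICC = .20, covariates explaining half
# of the between-classroom variance (all values illustrative)
print(round(mdes_two_level(J=60, n=25, icc=0.20, r2_cluster=0.5), 3))
```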
Westlund, Erik; Stuart, Elizabeth A. – American Journal of Evaluation, 2017
This article discusses the nonuse, misuse, and proper use of pilot studies in experimental evaluation research. The authors first show that there is little theoretical, practical, or empirical guidance available to researchers who seek to incorporate pilot studies into experimental evaluation research designs. The authors then discuss how pilot…
Descriptors: Use Studies, Pilot Projects, Evaluation Research, Experiments
VanHoudnos, Nathan M.; Greenhouse, Joel B. – Journal of Educational and Behavioral Statistics, 2016
When cluster randomized experiments are analyzed as if units were independent, test statistics for treatment effects can be anticonservative. Hedges proposed a correction for such tests by scaling them to control their Type I error rate. This article generalizes the Hedges correction from a posttest-only experimental design to more common designs…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Error of Measurement, Scaling
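The scaling the abstract refers to can be illustrated with the correction factor usually attributed to Hedges (2007) for the posttest-only, equal-cluster-size case; the sketch below states that factor as commonly reported and omits the accompanying degrees-of-freedom adjustment.

```python
import math

def hedges_scaled_t(t_naive, N, n, icc):
    """Scale an independence-assuming t statistic for clustering.

    t_naive: t statistic computed as if all N individuals were independent;
    n: common cluster size; icc: intraclass correlation.
    Uses the posttest-only correction factor commonly attributed to
    Hedges (2007); the matching degrees-of-freedom adjustment is omitted.
    """
    c = math.sqrt(((N - 2) - 2 * (n - 1) * icc) /
                  ((N - 2) * (1 + (n - 1) * icc)))
    return c * t_naive

# A naive t of 2.3 from 200 students in clusters of 20 with ICC = .15
# shrinks well below conventional significance (numbers illustrative).
print(round(hedges_scaled_t(2.3, N=200, n=20, icc=0.15), 2))
```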
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
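A rough way to see the cost of chasing smaller impacts: holding the design and power constant, required sample size scales with the inverse square of the target effect size. The arithmetic below is standard power reasoning, not a figure from the report.

```latex
% Sample size for fixed power and significance level scales as n \propto 1/\delta^2:
\frac{n_{\delta=0.10}}{n_{\delta=0.20}} \approx \left(\tfrac{0.20}{0.10}\right)^2 = 4,
\qquad
\frac{n_{\delta=0.05}}{n_{\delta=0.20}} \approx \left(\tfrac{0.20}{0.05}\right)^2 = 16.
```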
Rhoads, Christopher – Journal of Research on Educational Effectiveness, 2016
Experimental evaluations that involve the educational system usually involve a hierarchical structure (students are nested within classrooms that are nested within schools, etc.). Concerns about contamination, where research subjects receive certain features of an intervention intended for subjects in a different experimental group, have often led…
Descriptors: Educational Experiments, Error of Measurement, Research Design, Statistical Analysis
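A one-line illustration of why contamination pushes researchers toward cluster randomization: under the simplifying assumption that a fraction c of the comparison group receives the full intervention effect δ, the naive treatment-control contrast recovers only part of it. This dilution arithmetic is a textbook simplification, not the paper's model.

```latex
% If a share c of controls is contaminated and benefits fully,
% the intention-to-treat contrast estimates roughly
E[Y \mid T] - E[Y \mid C] \approx \delta - c\,\delta = (1 - c)\,\delta .
```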
Wing, Coady; Cook, Thomas D. – Journal of Policy Analysis and Management, 2013
The sharp regression discontinuity design (RDD) has three key weaknesses compared to the randomized clinical trial (RCT). It has lower statistical power, it is more dependent on statistical modeling assumptions, and its treatment effect estimates are limited to the narrow subpopulation of cases immediately around the cutoff, which is rarely of…
Descriptors: Regression (Statistics), Research Design, Statistical Analysis, Research Problems
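For readers comparing the designs: in a sharp RDD, treatment is assigned deterministically by whether a running variable crosses a cutoff, and the effect at the cutoff is typically estimated with a local linear fit on each side. The simulated sketch below is illustrative only; the bandwidth, data-generating values, and use of plain OLS are assumptions, not the authors' analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, cutoff, bandwidth = 2000, 0.0, 0.5

x = rng.uniform(-1, 1, n)               # running variable (e.g., a pretest score)
treated = (x >= cutoff).astype(float)   # sharp assignment rule
y = 0.4 * x + 0.25 * treated + rng.normal(0, 1, n)   # true jump at cutoff = 0.25

# Local linear regression: separate slopes on each side of the cutoff.
keep = np.abs(x - cutoff) <= bandwidth
xc = x[keep] - cutoff
X = sm.add_constant(np.column_stack([treated[keep], xc, treated[keep] * xc]))
fit = sm.OLS(y[keep], X).fit()
print(round(fit.params[1], 3))          # estimated treatment effect at the cutoff
```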
