Publication Date

| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 14 |
| Since 2007 (last 20 years) | 51 |
Descriptor

| Descriptor | Records |
| --- | --- |
| Program Evaluation | 78 |
| Sample Size | 78 |
| Program Effectiveness | 33 |
| Research Design | 25 |
| Educational Research | 22 |
| Research Methodology | 21 |
| Statistical Analysis | 21 |
| Evaluation Methods | 17 |
| Intervention | 17 |
| Comparative Analysis | 16 |
| Effect Size | 15 |
Author

| Author | Records |
| --- | --- |
| Spybrook, Jessaca | 4 |
| Deke, John | 3 |
| Slavin, Robert E. | 3 |
| Ben-Michael, Eli | 2 |
| Cheung, Alan C. K. | 2 |
| Feller, Avi | 2 |
| Kautz, Tim | 2 |
| Kelcey, Ben | 2 |
| Lu, Benjamin | 2 |
| Miratrix, Luke | 2 |
| Peck, Laura R. | 2 |
Education Level

| Education Level | Records |
| --- | --- |
| Elementary Education | 10 |
| Elementary Secondary Education | 9 |
| Adult Education | 4 |
| Early Childhood Education | 4 |
| Grade 3 | 3 |
| Grade 4 | 3 |
| Grade 5 | 3 |
| High Schools | 3 |
| Higher Education | 3 |
| Middle Schools | 3 |
| Secondary Education | 3 |
Audience

| Audience | Records |
| --- | --- |
| Researchers | 3 |
| Policymakers | 1 |
| Practitioners | 1 |
Location

| Location | Records |
| --- | --- |
| Florida | 3 |
| United Kingdom (England) | 3 |
| Georgia | 2 |
| Massachusetts | 2 |
| Brazil | 1 |
| District of Columbia | 1 |
| Indiana | 1 |
| Minnesota | 1 |
| North Carolina | 1 |
| Oregon | 1 |
| Pennsylvania | 1 |
Laws, Policies, & Programs

| Law / Program | Records |
| --- | --- |
| No Child Left Behind Act 2001 | 1 |
| Workforce Investment Act 1998… | 1 |
Assessments and Surveys

| Assessment | Records |
| --- | --- |
| Iowa Tests of Basic Skills | 1 |
| Patterns of Adaptive Learning… | 1 |
| Program for International… | 1 |
What Works Clearinghouse Rating

| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 2 |
| Meets WWC Standards with or without Reservations | 2 |
Rebecca Walcott; Isabelle Cohen; Denise Ferris – Evaluation Review, 2024
When and how to survey potential respondents is often determined by budgetary and external constraints, but the choice of survey modality may have enormous implications for data quality. Different survey modalities may be differentially susceptible to measurement error attributable to interviewer assignment, known as interviewer effects. In this…
Descriptors: Surveys, Research Methodology, Error of Measurement, Interviews
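The interviewer effects this abstract refers to are commonly summarized as an intraclass correlation (ICC): the share of response variance attributable to which interviewer conducted the interview. Below is a minimal sketch of that calculation on simulated data, assuming a random-intercept model; the column names and simulated numbers are illustrative, not taken from the article.

```python
# Hypothetical sketch: interviewer effects as an intraclass correlation
# from a random-intercept model fit with statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_interviewers, n_per = 20, 30
ids = np.repeat(np.arange(n_interviewers), n_per)
# Simulated responses: interviewer-level shifts plus respondent-level noise.
y = rng.normal(0, 0.4, n_interviewers)[ids] + rng.normal(0, 1.0, ids.size)
df = pd.DataFrame({"response": y, "interviewer": ids})

# Random intercept per interviewer, no fixed covariates.
fit = smf.mixedlm("response ~ 1", df, groups=df["interviewer"]).fit()
var_between = fit.cov_re.iloc[0, 0]  # between-interviewer variance
var_within = fit.scale               # residual (respondent-level) variance
print(f"Interviewer ICC: {var_between / (var_between + var_within):.3f}")
```

A larger ICC under one modality than another (say, in-person versus phone) would indicate that modality is more susceptible to interviewer effects.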
Anthony Gambino – Society for Research on Educational Effectiveness, 2021
Analysis of symmetrically predicted endogenous subgroups (ASPES) is an approach to assessing heterogeneity in an intention-to-treat (ITT) effect from a randomized experiment when an intermediate variable (one measured after random assignment and before outcomes) is hypothesized to be related to the ITT effect but is measured in only one group. For example,…
Descriptors: Randomized Controlled Trials, Prediction, Program Evaluation, Credibility
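To make the ASPES idea concrete: fit a prediction model for the intermediate variable in the group where it is observed, apply it symmetrically to both groups, and compare treatment-control outcome means within the predicted subgroups. The sketch below is a stripped-down illustration of that logic on simulated data, with a plain logistic model standing in for whatever predictor an analyst might choose; full ASPES also uses split-sample prediction so that no unit is classified by a model fit on its own data.

```python
# Hypothetical sketch of the ASPES logic: predict an intermediate variable
# from baseline covariates, classify ALL units by predicted status, then
# compare treatment-control outcomes within predicted subgroups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))                 # exogenous baseline covariates
z = rng.integers(0, 2, n)                   # random assignment (0/1)
engaged = (X[:, 0] + rng.normal(size=n) > 0).astype(int)  # intermediate var
y = 0.3 * z * engaged + X[:, 1] + rng.normal(size=n)      # outcome

# The intermediate variable is observed only in the treatment group.
clf = LogisticRegression().fit(X[z == 1], engaged[z == 1])
predicted = clf.predict(X)                  # symmetric: scored for everyone

for g in (0, 1):
    m = predicted == g
    itt = y[m & (z == 1)].mean() - y[m & (z == 0)].mean()
    print(f"Predicted subgroup {g}: ITT difference = {itt:.3f}")
```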
Benjamin Lu; Eli Ben-Michael; Avi Feller; Luke Miratrix – Journal of Educational and Behavioral Statistics, 2023
In multisite trials, learning about treatment effect variation across sites is critical for understanding where and for whom a program works. Unadjusted comparisons, however, capture "compositional" differences in the distributions of unit-level features as well as "contextual" differences in site-level features, including…
Descriptors: Statistical Analysis, Statistical Distributions, Program Implementation, Comparative Analysis
Benjamin Lu; Eli Ben-Michael; Avi Feller; Luke Miratrix – Grantee Submission, 2022
In multisite trials, learning about treatment effect variation across sites is critical for understanding where and for whom a program works. Unadjusted comparisons, however, capture "compositional" differences in the distributions of unit-level features as well as "contextual" differences in site-level features, including…
Descriptors: Statistical Analysis, Statistical Distributions, Program Implementation, Comparative Analysis
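A toy illustration of the compositional/contextual distinction in the two records above (the second appears to be the grantee-submission version of the same paper): when the unit-level covariate distribution differs across sites, raw site-by-site comparisons show effect variation even if the effect function is identical everywhere. The simulation and variable names below are illustrative assumptions, not the authors' estimator.

```python
# Hypothetical sketch: raw vs covariate-adjusted site effects. Here ALL
# cross-site variation is compositional (sites differ only in the
# distribution of x), so adjustment shrinks the apparent variation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
site = np.repeat(np.arange(10), 400)
x = rng.normal(site * 0.3, 1.0)      # covariate distribution shifts by site
z = rng.integers(0, 2, site.size)    # random assignment within sites
y = 0.2 * z + 0.5 * x * z + x + rng.normal(size=site.size)
df = pd.DataFrame({"y": y, "z": z, "x": x, "site": site})

raw, adj = [], []
for _, d in df.groupby("site"):
    raw.append(d.loc[d.z == 1, "y"].mean() - d.loc[d.z == 0, "y"].mean())
    adj.append(smf.ols("y ~ z * x", data=d).fit().params["z"])

print(f"SD of raw site effects:      {np.std(raw):.3f}")
print(f"SD of adjusted site effects: {np.std(adj):.3f}")
```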
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
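The practical stakes of the smaller-impacts point are easy to see with the standard minimum-detectable-effect-size (MDES) formula: required sample size scales with the inverse square of the target effect. A back-of-envelope sketch using the standard two-arm power formula, not a calculation from the article:

```python
# Back-of-envelope sketch: total sample needed to detect a standardized
# impact in a two-arm individually randomized trial.
from scipy.stats import norm

def n_required(mdes: float, alpha: float = 0.05, power: float = 0.80,
               p_treat: float = 0.5, r2: float = 0.0) -> float:
    """Total N so the minimum detectable effect size equals `mdes`.

    Inverts MDES = (z_{1-alpha/2} + z_{power}) *
    sqrt((1 - r2) / (p(1-p) N)), where r2 is the outcome variance
    explained by baseline covariates.
    """
    m = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return m**2 * (1 - r2) / (p_treat * (1 - p_treat) * mdes**2)

# Halving the target effect size roughly quadruples the required sample.
for d in (0.20, 0.10, 0.05):
    print(f"MDES = {d:.2f}: N ≈ {n_required(d):,.0f}")
```

With alpha = .05, 80% power, and equal allocation, detecting 0.20 SD takes roughly 785 individuals, while 0.05 SD takes over 12,000.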
Heather C. Hill; Anna Erickson – Annenberg Institute for School Reform at Brown University, 2021
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Effectiveness, Multivariate Analysis, Randomized Controlled Trials
Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials, defined as multiple sites or multiple arms within a single evaluation, as well as replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to raise statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
Wolf, Rebecca; Morrison, Jennifer; Inns, Amanda; Slavin, Robert; Risman, Kelsey – Journal of Research on Educational Effectiveness, 2020
Rigorous evidence of program effectiveness has become increasingly important with the 2015 passage of the Every Student Succeeds Act (ESSA). One question that has not yet been fully explored is whether program evaluations carried out or commissioned by developers produce larger effect sizes than evaluations conducted by independent third parties.…
Descriptors: Program Evaluation, Program Effectiveness, Effect Size, Sample Size
Deke, John; Wei, Thomas; Kautz, Tim – Society for Research on Educational Effectiveness, 2018
Evaluators of education interventions increasingly need to design studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." For example, an evaluation of Response to Intervention from the Institute of Education Sciences (IES) detected impacts ranging from 0.13 to 0.17 standard…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
What Works Clearinghouse, 2018
Underlying all What Works Clearinghouse (WWC) products are the WWC Study Review Guides, which are intended for use by WWC-certified reviewers to assess studies against the WWC evidence standards. As part of an ongoing effort to increase transparency, promote collaboration, and encourage widespread use of the WWC standards, the Institute of Education…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peck, Laura R. – American Journal of Evaluation, 2015
Several analytic strategies exist for opening up the "black box" to reveal more about what drives policy and program impacts. This article focuses on one of these strategies: the Analysis of Symmetrically-Predicted Endogenous Subgroups (ASPES). ASPES uses exogenous baseline data to identify endogenously defined subgroups, keeping the…
Descriptors: Program Evaluation, Credibility, Prediction, Sample Size
Bloom, Howard S.; Spybrook, Jessaca – Journal of Research on Educational Effectiveness, 2017
Multisite trials, which are being used with increasing frequency in education and evaluation research, provide an exciting opportunity for learning about how the effects of interventions or programs are distributed across sites. In particular, these studies can produce rigorous estimates of a cross-site mean effect of program assignment…
Descriptors: Program Effectiveness, Program Evaluation, Sample Size, Evaluation Research
Dawson, Anneka; Yeomans, Emily; Brown, Elena Rosa – Educational Research, 2018
Background: The Education Endowment Foundation (EEF) is an independent charity that was established in 2011 with the explicit aim of breaking the link between family income and educational achievement in England. Over the seven years since its inception, EEF has contributed to the existing evidence base by funding over one hundred randomised…
Descriptors: Foreign Countries, Educational Research, Randomized Controlled Trials, Research Problems
Louie, Josephine; Rhoads, Christopher; Mark, June – American Journal of Evaluation, 2016
Interest in the regression discontinuity (RD) design as an alternative to randomized controlled trials (RCTs) has grown in recent years. There is little practical guidance, however, on the conditions that lead to a successful RD evaluation or on the utility of studies with underpowered RD designs. This article describes the use of an RD design to…
Descriptors: Regression (Statistics), Program Evaluation, Algebra, Supplementary Education
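For readers unfamiliar with the mechanics, a sharp RD estimate is essentially the gap between two local regression fits evaluated at the cutoff. A minimal sketch on simulated data follows; the fixed bandwidth and the simulation are illustrative assumptions, since applied RD work uses data-driven bandwidths and robust inference.

```python
# Hypothetical sketch of a sharp regression discontinuity estimate:
# local linear fits on each side of the cutoff within a bandwidth.
import numpy as np

rng = np.random.default_rng(3)
n, cutoff, bandwidth = 5000, 0.0, 0.5
running = rng.uniform(-1, 1, n)                  # assignment variable
treated = running >= cutoff
y = 0.4 * treated + 0.8 * running + rng.normal(0, 1, n)

def fit_at_cutoff(mask):
    # Linear fit of outcome on the running variable, evaluated at cutoff.
    slope, intercept = np.polyfit(running[mask], y[mask], 1)
    return intercept + slope * cutoff

left = (running < cutoff) & (running > cutoff - bandwidth)
right = treated & (running < cutoff + bandwidth)
print(f"RD estimate at cutoff: {fit_at_cutoff(right) - fit_at_cutoff(left):.3f}")
```

Because only observations near the cutoff carry identifying information, RD studies typically need substantially larger samples than RCTs for the same power, which is the power concern the abstract raises.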
Paul Siciliano; Bethany Hornbeck; Sarah Hanks; Summer Kuhn; Alicia Zbehlik; Ann L. Chester – Journal of STEM Outreach, 2018
This paper explores the dynamics of a research partnership between a practicing clinician/researcher and 34 West Virginia high school students participating in a precollege STEM intervention program. The collaboration gave the clinician a more diverse study sample for examining attitudes about knee osteoarthritis in adults over 40. It…
Descriptors: Partnerships in Education, Sample Size, Experiential Learning, STEM Education
