Showing 1 to 15 of 51 results
Peer reviewed
Rebecca Walcott; Isabelle Cohen; Denise Ferris – Evaluation Review, 2024
When and how to survey potential respondents is often determined by budgetary and external constraints, but choice of survey modality may have enormous implications for data quality. Different survey modalities may be differentially susceptible to measurement error attributable to interviewer assignment, known as interviewer effects. In this…
Descriptors: Surveys, Research Methodology, Error of Measurement, Interviews
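The interviewer-effects problem this abstract describes is commonly summarized as an intraclass correlation (ICC): the share of response variance attributable to interviewer assignment. Below is a minimal sketch of that quantity on simulated data; the interviewer and residual standard deviations are assumed for illustration and are not values from the article.

```python
# Minimal sketch (assumed illustration): estimate the interviewer ICC by
# one-way ANOVA on simulated survey responses.
import numpy as np

rng = np.random.default_rng(4)
k, m = 40, 50                                   # interviewers, respondents each
u = rng.normal(0.0, 0.3, k)                     # interviewer effects (SD 0.3, assumed)
y = u[:, None] + rng.normal(0.0, 1.0, (k, m))   # responses, residual SD 1.0 (assumed)

group_means = y.mean(axis=1)
msb = m * group_means.var(ddof=1)               # between-interviewer mean square
msw = y.var(axis=1, ddof=1).mean()              # pooled within-interviewer mean square
var_between = max((msb - msw) / m, 0.0)
icc = var_between / (var_between + msw)
print(f"estimated interviewer ICC: {icc:.3f} (true ~ {0.09 / 1.09:.3f})")
```

Even a small ICC inflates the variance of survey estimates roughly by a factor of 1 + (m - 1) * ICC when each interviewer handles m respondents, which is why modality and interviewer assignment matter for data quality.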
Peer reviewed
Anthony Gambino – Society for Research on Educational Effectiveness, 2021
Analysis of symmetrically predicted endogenous subgroups (ASPES) is an approach to assessing heterogeneity in an intent-to-treat (ITT) effect from a randomized experiment when an intermediate variable (one that is measured after random assignment and before outcomes) is hypothesized to be related to the ITT effect, but is only measured in one group. For example,…
Descriptors: Randomized Controlled Trials, Prediction, Program Evaluation, Credibility
Benjamin Lu; Eli Ben-Michael; Avi Feller; Luke Miratrix – Journal of Educational and Behavioral Statistics, 2023
In multisite trials, learning about treatment effect variation across sites is critical for understanding where and for whom a program works. Unadjusted comparisons, however, capture "compositional" differences in the distributions of unit-level features as well as "contextual" differences in site-level features, including…
Descriptors: Statistical Analysis, Statistical Distributions, Program Implementation, Comparative Analysis
Benjamin Lu; Eli Ben-Michael; Avi Feller; Luke Miratrix – Grantee Submission, 2022
In multisite trials, learning about treatment effect variation across sites is critical for understanding where and for whom a program works. Unadjusted comparisons, however, capture "compositional" differences in the distributions of unit-level features as well as "contextual" differences in site-level features, including…
Descriptors: Statistical Analysis, Statistical Distributions, Program Implementation, Comparative Analysis
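The compositional/contextual distinction in the Lu, Ben-Michael, Feller, and Miratrix abstract can be illustrated by reweighting. In the simulated sketch below (not the authors' estimator; all numbers are invented), two sites share the same contextual effect but serve different unit mixes, so the raw cross-site gap disappears once one site's units are reweighted to the other's covariate distribution.

```python
# Minimal sketch (assumed illustration): a cross-site impact gap that is
# purely "compositional" vanishes after reweighting unit-level features.
import numpy as np

rng = np.random.default_rng(0)

def simulate_site(n, p_high_need, context_bump):
    x = rng.binomial(1, p_high_need, n)        # unit-level feature (high need)
    t = rng.binomial(1, 0.5, n)                # random assignment
    y = 0.5 * x + t * (0.2 + 0.3 * x + context_bump) + rng.normal(0.0, 1.0, n)
    return x, t, y

def impact(t, y, w=None):
    w = np.ones_like(y) if w is None else w
    return (np.average(y[t == 1], weights=w[t == 1])
            - np.average(y[t == 0], weights=w[t == 0]))

# Sites differ only in composition: site B serves more high-need units.
xa, ta, ya = simulate_site(20000, 0.2, 0.0)
xb, tb, yb = simulate_site(20000, 0.8, 0.0)
print("unadjusted cross-site gap:", round(impact(tb, yb) - impact(ta, ya), 3))

# Reweight site B's units to site A's covariate mix before comparing.
w = np.where(xb == 1, 0.2 / 0.8, 0.8 / 0.2)
print("composition-adjusted gap :", round(impact(tb, yb, w) - impact(ta, ya), 3))
```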
Peer reviewed
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
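The power argument here turns on the standard minimum detectable effect size (MDES) calculation, MDES = (z_{1-alpha/2} + z_{power}) * SE. A short sketch follows, assuming a two-arm, individually randomized design with a standardized outcome and no covariate adjustment; the sample sizes are illustrative, not from the article.

```python
# Minimal sketch: minimum detectable effect size for a two-arm RCT,
# using the normal approximation (alpha = 0.05 two-sided, 80% power).
from statistics import NormalDist

def mdes(n_total, p_treat=0.5, alpha=0.05, power=0.80):
    """Smallest true impact (in SD units) detectable with the given power."""
    z = NormalDist().inv_cdf
    multiplier = z(1 - alpha / 2) + z(power)   # ~2.80 for 80% power
    se = (1.0 / (p_treat * (1 - p_treat) * n_total)) ** 0.5
    return multiplier * se

for n in (500, 1000, 4000):
    print(f"N = {n:5d}: MDES = {mdes(n):.3f} SD")
# Halving the detectable impact requires roughly quadrupling the sample,
# which is why targeting impacts well below 0.20 SD is costly.
```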
Heather C. Hill; Anna Erickson – Annenberg Institute for School Reform at Brown University, 2021
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Effectiveness, Multivariate Analysis, Randomized Controlled Trials
Peer reviewed
Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials, defined as multiple sites or multiple arms within a single evaluation as well as replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to increase statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
Peer reviewed
Wolf, Rebecca; Morrison, Jennifer; Inns, Amanda; Slavin, Robert; Risman, Kelsey – Journal of Research on Educational Effectiveness, 2020
Rigorous evidence of program effectiveness has become increasingly important with the 2015 passage of the Every Student Succeeds Act (ESSA). One question that has not yet been fully explored is whether program evaluations carried out or commissioned by developers produce larger effect sizes than evaluations conducted by independent third parties.…
Descriptors: Program Evaluation, Program Effectiveness, Effect Size, Sample Size
Peer reviewed
PDF on ERIC
Deke, John; Wei, Thomas; Kautz, Tim – Society for Research on Educational Effectiveness, 2018
Evaluators of education interventions increasingly need to design studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." For example, an evaluation of Response to Intervention from the Institute of Education Sciences (IES) detected impacts ranging from 0.13 to 0.17 standard…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
Peer reviewed
PDF on ERIC
What Works Clearinghouse, 2018
Underlying all What Works Clearinghouse (WWC) products are WWC Study Review Guides, which are intended for use by WWC certified reviewers to assess studies against the WWC evidence standards. As part of an ongoing effort to increase transparency, promote collaboration, and encourage widespread use of the WWC standards, the Institute of Education…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Peck, Laura R. – American Journal of Evaluation, 2015
Several analytic strategies exist for opening up the "black box" to reveal more about what drives policy and program impacts. This article focuses on one of these strategies: the Analysis of Symmetrically-Predicted Endogenous Subgroups (ASPES). ASPES uses exogenous baseline data to identify endogenously-defined subgroups, keeping the…
Descriptors: Program Evaluation, Credibility, Prediction, Sample Size
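A minimal sketch of the ASPES logic as Peck describes it (an assumed illustration, not the article's code): the endogenous subgroup indicator is observed in only one experimental group, so a model fit on exogenous baseline covariates predicts membership symmetrically in both groups, and ITT contrasts are then taken within predicted subgroups, preserving randomization. All variable names and data-generating values below are invented.

```python
# Minimal sketch (assumed illustration) of symmetric prediction of an
# endogenous subgroup from baseline covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10000
x = rng.normal(size=(n, 3))                          # baseline covariates
t = rng.binomial(1, 0.5, n)                          # random assignment
m = (x[:, 0] + rng.normal(0, 1, n) > 0).astype(int)  # subgroup; usable only where t == 1
y = 1.0 * x[:, 0] + t * (0.5 * m) + rng.normal(0, 1, n)

# Step 1: fit the subgroup model where the mediator is observed (treatment group).
model = LogisticRegression().fit(x[t == 1], m[t == 1])

# Step 2: symmetrically predict membership for *both* groups from baseline data.
pred = model.predict(x)

# Step 3: experimental ITT contrast within each predicted subgroup.
for g in (0, 1):
    sel = pred == g
    itt = y[sel & (t == 1)].mean() - y[sel & (t == 0)].mean()
    print(f"predicted subgroup {g}: ITT = {itt:.2f}")
```

Because prediction uses only baseline data, the within-subgroup contrasts remain experimental; the price is attenuation, since predicted membership is an imperfect proxy for actual membership.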
Peer reviewed
Bloom, Howard S.; Spybrook, Jessaca – Journal of Research on Educational Effectiveness, 2017
Multisite trials, which are being used with increasing frequency in education and evaluation research, provide an exciting opportunity for learning about how the effects of interventions or programs are distributed across sites. In particular, these studies can produce rigorous estimates of a cross-site mean effect of program assignment…
Descriptors: Program Effectiveness, Program Evaluation, Sample Size, Evaluation Research
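One way to see how multisite trials support inference about cross-site variation, sketched below with a standard method-of-moments correction (not necessarily the authors' estimator; all simulation values are assumed): the spread of site-level impact estimates mixes true variation with estimation error, so subtracting the average squared standard error recovers the cross-site SD.

```python
# Minimal sketch (assumed illustration): naive vs. corrected estimates of
# cross-site impact variation in a multisite randomized trial.
import numpy as np

rng = np.random.default_rng(2)
J, n = 60, 200                                  # sites, units per site
true_effects = rng.normal(0.20, 0.10, J)        # true cross-site SD = 0.10

est, se2 = [], []
for b in true_effects:
    t = rng.binomial(1, 0.5, n)
    y = b * t + rng.normal(0.0, 1.0, n)
    est.append(y[t == 1].mean() - y[t == 0].mean())
    se2.append(y[t == 1].var(ddof=1) / (t == 1).sum()
               + y[t == 0].var(ddof=1) / (t == 0).sum())

est = np.array(est)
print("cross-site mean impact :", round(est.mean(), 3))
print("naive SD of estimates  :", round(est.std(ddof=1), 3))  # inflated
tau2 = max(est.var(ddof=1) - np.mean(se2), 0.0)
print("corrected cross-site SD:", round(tau2 ** 0.5, 3))      # near 0.10
```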
Peer reviewed
Dawson, Anneka; Yeomans, Emily; Brown, Elena Rosa – Educational Research, 2018
Background: The Education Endowment Foundation (EEF) is an independent charity that was established in 2011 with the explicit aim of breaking the link between family income and educational achievement in England. Over the seven years since its inception, EEF has contributed to the existing evidence base by funding over one hundred randomised…
Descriptors: Foreign Countries, Educational Research, Randomized Controlled Trials, Research Problems
Peer reviewed
Louie, Josephine; Rhoads, Christopher; Mark, June – American Journal of Evaluation, 2016
Interest in the regression discontinuity (RD) design as an alternative to randomized control trials (RCTs) has grown in recent years. There is little practical guidance, however, on conditions that would lead to a successful RD evaluation or the utility of studies with underpowered RD designs. This article describes the use of RD design to…
Descriptors: Regression (Statistics), Program Evaluation, Algebra, Supplementary Education
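For readers unfamiliar with the design this abstract evaluates: a sharp RD estimate compares outcome trends on either side of an assignment cutoff and takes the gap at the cutoff as the program effect. A minimal sketch on simulated data follows (assumed setup: students below a pretest cutoff receive the supplemental program; the bandwidth and coefficients are invented).

```python
# Minimal sketch (assumed illustration): sharp regression discontinuity via
# separate local linear fits on each side of the cutoff.
import numpy as np

rng = np.random.default_rng(3)
n, cutoff, effect = 5000, 0.0, 0.3
r = rng.uniform(-1, 1, n)                     # running variable (e.g., pretest)
treated = (r < cutoff).astype(int)            # assigned if below cutoff
y = 0.8 * r + effect * treated + rng.normal(0, 1, n)

h = 0.25                                      # bandwidth (assumed)
left = (r >= cutoff - h) & (r < cutoff)       # treated side
right = (r >= cutoff) & (r <= cutoff + h)     # untreated side

b_left = np.polyfit(r[left], y[left], 1)      # [slope, intercept]
b_right = np.polyfit(r[right], y[right], 1)
rd = np.polyval(b_left, cutoff) - np.polyval(b_right, cutoff)
print(f"RD estimate at cutoff: {rd:.3f}  (true effect = {effect})")
```

Because only observations near the cutoff identify the effect, RD designs typically need several times the sample of an equivalent RCT for the same power, which is central to the underpowered-RD question the article raises.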
Peer reviewed
PDF on ERIC
Paul Siciliano; Bethany Hornbeck; Sarah Hanks; Summer Kuhn; Alicia Zbehlik; Ann L. Chester – Journal of STEM Outreach, 2018
This paper explores the dynamics of a research partnership between a practicing clinician/researcher and 34 West Virginia high school students participating in a precollege STEM intervention program. The collaboration provided the clinician with a more diverse study sample for examining attitudes about knee osteoarthritis in adults over 40. It…
Descriptors: Partnerships in Education, Sample Size, Experiential Learning, STEM Education