Showing 1 to 15 of 179 results
Peer reviewed
Julie Murray; Charlie Rioux; Sophie Parent; Jean R. Séguin; Michelle Pinsonneault; William D. Fraser; Natalie Castellanos-Ryan – Prevention Science, 2024
Parenting programs have been shown to be effective in preventing and reducing externalising problems in young children. Despite their efficacy, the low rate of initial parental engagement in these programs is a major challenge for clinicians and researchers. Few studies have examined factors associated with rates of initial engagement in parenting…
Descriptors: Parent Participation, Parent Education, Prevention, Child Behavior
Peer reviewed
Genik, Lara M.; Aerts, Elisabeth L.; Nauman, Hiba; Barney, Chantel C.; Lewis, Stephen P.; McMurtry, C. Meghan – American Journal on Intellectual and Developmental Disabilities, 2021
Within a parallel-group randomized controlled trial, the impact of pain training on Respite Workers' (RW) care approaches and training evaluations was explored. RW (n = 158) from 14 organizations received pain or control training following randomization. Researchers were blind until randomization; allocations were not shared explicitly with…
Descriptors: Randomized Controlled Trials, Pain, Caregiver Training, Respite Care
Peer reviewed
Troyer, Margaret – Journal of Research in Reading, 2022
Background: Randomised controlled trials (RCTs) have long been considered the gold standard in education research. Federal funds are allocated to evaluations that meet What Works Clearinghouse standards; RCT designs are required in order to meet these standards without reservations. Schools seek out interventions that are research-based, in other…
Descriptors: Educational Research, Randomized Controlled Trials, Adolescents, Reading Instruction
Weiss, Michael J.; Unterman, Rebecca; Biedzio, Dorota – MDRC, 2021
Some education programs' early positive effects disappear over time. Other programs have unanticipated positive long-term effects. Foundations warn of the dangers of putting too much weight on in-program effects, which, they say, often fade after a program ends. This Issue Focus tackles the topic of post-program effects in postsecondary education.…
Descriptors: Outcomes of Education, Higher Education, College Credits, Program Evaluation
Maynard, Rebecca A.; Baelen, Rebecca N.; Fein, David; Souvanna, Phomdaen – Grantee Submission, 2022
This article offers a case example of how experimental evaluation methods can be coupled with principles of design-based implementation research (DBIR), improvement science (IS), and rapid-cycle evaluation (RCE) methods to provide relatively quick, low-cost, credible assessments of strategies designed to improve programs, policies, or practices.…
Descriptors: Program Improvement, Evaluation Methods, Efficiency, Young Adults
Ross, Stephen L.; Brunner, Eric; Rosen, Rachel – Grantee Submission, 2020
This paper considers recent efforts to conduct experimental and quasi-experimental evaluations of career and technical education programs. It focuses on understanding the counterfactual, or control population, for these program evaluations, discussing how the educational experiences of the control population might vary from those of the treated…
Descriptors: Vocational Education, Program Evaluation, Educational Experience, Regression (Statistics)
Peer reviewed
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
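To see why targeting smaller impacts strains trial designs, consider a back-of-the-envelope power calculation (standard two-arm arithmetic, not the authors' own numbers): with equal allocation, a two-sided alpha of .05, and 80% power, the minimum detectable effect size of an individually randomized trial is approximately

\mathrm{MDES} \approx 2.8 \times \mathrm{SE}(\hat{\delta}) \approx 2.8 \times \frac{2}{\sqrt{n}} = \frac{5.6}{\sqrt{n}},

so about 784 participants suffice to detect 0.20 SD, but about 3,136 are needed for 0.10 SD: halving the target effect roughly quadruples the required sample.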
Peer reviewed
Full text available on ERIC (PDF)
Fein, David; Maynard, Rebecca A. – Grantee Submission, 2022
In 2015, Abt Associates received a grant from the Institute of Education Sciences (IES) for a five-year "Development and Innovation" study of PTC. The purposes of the study were to gauge progress in implementing PTC and to develop and test improvements where needed. Fein et al. (2020) summarize the IES study's approach and findings. A…
Descriptors: Program Evaluation, Program Implementation, Program Improvement, College Students
Heather C. Hill; Anna Erickson – Annenberg Institute for School Reform at Brown University, 2021
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Effectiveness, Multivariate Analysis, Randomized Controlled Trials
Peer reviewed
Shrubsole, Kirstine; Rogers, Kris; Power, Emma – International Journal of Language & Communication Disorders, 2022
Background: While implementation studies in aphasia management have shown promising improvements to clinical practice, it is currently unknown if aphasia implementation outcomes are sustained and what factors may influence clinical sustainability. Aims: To evaluate the sustainment (i.e., sustained improvement of aphasia management practices and…
Descriptors: Speech Language Pathology, Allied Health Personnel, Aphasia, Program Implementation
Peer reviewed
Lortie-Forgues, Hugues; Inglis, Matthew – Educational Researcher, 2019
There are a growing number of large-scale educational randomized controlled trials (RCTs). Considering their expense, it is important to reflect on the effectiveness of this approach. We assessed the magnitude and precision of effects found in those large-scale RCTs commissioned by the UK-based Education Endowment Foundation and the U.S.-based…
Descriptors: Randomized Controlled Trials, Educational Research, Effect Size, Program Evaluation
Peer reviewed
Larry L. Orr; Robert B. Olsen; Stephen H. Bell; Ian Schmid; Azim Shivji; Elizabeth A. Stuart – Journal of Policy Analysis and Management, 2019
Evidence-based policy at the local level requires predicting the impact of an intervention to inform whether it should be adopted. Increasingly, local policymakers have access to published research evaluating the effectiveness of policy interventions from national research clearinghouses that review and disseminate evidence from program…
Descriptors: Educational Policy, Evidence Based Practice, Intervention, Decision Making
Peer reviewed
Full text available on ERIC (PDF)
Deke, John; Wei, Thomas; Kautz, Tim – Society for Research on Educational Effectiveness, 2018
Evaluators of education interventions increasingly need to design studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." For example, an evaluation of Response to Intervention from the Institute of Education Sciences (IES) detected impacts ranging from 0.13 to 0.17 standard…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
Peer reviewed
Henry May; Aly Blakeney – AERA Online Paper Repository, 2022
This paper presents evidence confirming the validity of the RD design in the Reading Recovery study by examining its ability to replicate the first-grade results observed in the original i3 RCT focused on short-term impacts. Over 1,800 schools participated in the RD study across all four cohort years. The RD design used cutoff-based…
Descriptors: Reading Programs, Reading Instruction, Cutting Scores, Comparative Analysis
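For readers unfamiliar with cutoff-based assignment, a minimal sketch of how such an RD estimate is computed follows; the variable names, bandwidth, and simulated data are illustrative assumptions, not the study's actual specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
score = rng.uniform(-1, 1, n)                  # running variable, centered at the cutoff (0)
treat = (score < 0).astype(int)                # students below the cutoff receive the program
y = 0.5 * score + 0.25 * treat + rng.normal(0, 1, n)  # simulated outcome with a true jump of 0.25

df = pd.DataFrame({"y": y, "score": score, "treat": treat})
local = df[df["score"].abs() <= 0.5]           # ad hoc bandwidth around the cutoff

# Local linear regression with separate slopes on each side of the cutoff;
# the coefficient on `treat` estimates the discontinuity (the treatment effect).
model = smf.ols("y ~ treat + score + treat:score", data=local).fit()
print(model.params["treat"])                   # should recover roughly 0.25

In practice, RD analyses add data-driven bandwidth selection and robustness checks, but the core estimate is this jump in the regression function at the cutoff.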
Peer reviewed
Hallberg, Kelly; Williams, Ryan; Swanlund, Andrew – Journal of Research on Educational Effectiveness, 2020
More aggregate data on school performance are available than ever before, opening up new possibilities for applied researchers interested in assessing the effectiveness of school-level interventions quickly and at relatively low cost by implementing comparative interrupted time series (CITS) designs. We examine the extent to which effect…
Descriptors: Data Use, Research Methodology, Program Effectiveness, Design
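As a concrete illustration of the CITS logic the abstract describes, here is a minimal sketch on simulated school-by-year panel data; the variable names, effect size, and model are illustrative assumptions, not the authors' specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
years = np.arange(-4, 4)                        # years relative to the intervention
df = pd.DataFrame([(s, t) for s in range(200) for t in years],
                  columns=["school", "time"])
df["treat"] = (df["school"] < 100).astype(int)  # first 100 schools adopt the program
df["post"] = (df["time"] >= 0).astype(int)
df["y"] = (0.05 * df["time"]                    # shared secular trend
           + 0.15 * df["treat"] * df["post"]    # true level shift in treated schools
           + rng.normal(0, 1, len(df)))

# The full interaction fits each group's baseline trend and post-period deviation;
# `treat:post` is the level change in treated schools relative to comparison schools.
fit = smf.ols("y ~ time * treat * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school"]})
print(fit.params["treat:post"])                 # should recover roughly 0.15

The appeal noted in the abstract is that models like this run on publicly available aggregate performance data, trading some internal validity for speed and cost.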