Showing 31 to 45 of 227 results
Peer reviewed
Heather C. Hill; Anna Erickson – Educational Researcher, 2019
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Implementation, Program Effectiveness, Intervention
Peer reviewed
Norwich, Brahm; Koutsouris, George – International Journal of Research & Method in Education, 2020
This paper describes the context, processes and issues experienced over 5 years in which an RCT was carried out to evaluate a programme for children aged 7-8 who were struggling with their reading. Its specific aim is to illuminate questions about the design of complex teaching approaches and their evaluation using an RCT. This covers the early…
Descriptors: Randomized Controlled Trials, Program Evaluation, Reading Programs, Educational Research
Peer reviewed
Henry May; Aly Blakeney – AERA Online Paper Repository, 2022
This paper presents evidence confirming the validity of the RD design in the Reading Recovery study by examining its ability to replicate the 1st grade results observed in the original i3 RCT focused on short-term impacts. Over 1,800 schools participated in the RD study across all four cohort years. The RD design used cutoff-based…
Descriptors: Reading Programs, Reading Instruction, Cutting Scores, Comparative Analysis
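As a rough illustration of the cutoff-based assignment this entry refers to, the sketch below simulates a sharp regression discontinuity and estimates the jump at the cutoff with a local linear regression. The data, variable names, and bandwidth are hypothetical and are not taken from the Reading Recovery study.

```python
# Minimal sketch (not the study's code): a sharp regression-discontinuity
# estimate from cutoff-based assignment, using local linear regression in
# a bandwidth around the cutoff. All values below are simulated for
# illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
score = rng.uniform(-1, 1, n)             # assignment (running) variable, cutoff at 0
treated = (score < 0).astype(int)         # e.g. students below the cutoff receive the program
outcome = 0.5 * score + 0.3 * treated + rng.normal(0, 0.5, n)

h = 0.25                                  # illustrative bandwidth around the cutoff
df = pd.DataFrame({"y": outcome, "score": score, "treated": treated})
local = df[df["score"].abs() <= h]

# Separate slopes on each side of the cutoff; 'treated' is the jump at 0.
fit = smf.ols("y ~ treated * score", data=local).fit()
print(fit.params["treated"])              # local RD estimate of the program effect
```

A local linear fit with separate slopes on each side of the cutoff is the standard sharp-RD estimator; in practice, bandwidth choice and robustness checks drive much of the analysis.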
Peer reviewed
Deke, John; Wei, Thomas; Kautz, Tim – Society for Research on Educational Effectiveness, 2018
Evaluators of education interventions increasingly need to design studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." For example, an evaluation of Response to Intervention from the Institute of Education Sciences (IES) detected impacts ranging from 0.13 to 0.17 standard…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
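To make the sample-size implication concrete, here is a minimal power calculation, assuming a simple two-arm, individually randomized design with a two-sided test at alpha = 0.05 and 80% power; those design choices are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: per-arm sample size needed to detect a small standardized
# effect in a two-arm individually randomized trial. The effect sizes follow
# the abstract; alpha, power, and the two-sample t-test framing are assumed.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect in (0.20, 0.13):
    n_per_arm = analysis.solve_power(effect_size=effect, alpha=0.05,
                                     power=0.80, ratio=1.0,
                                     alternative="two-sided")
    print(f"effect = {effect:.2f} SD -> ~{n_per_arm:.0f} per arm")
```

Cluster-randomized trials, which dominate this literature, require larger samples still once intracluster correlation is accounted for.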
Wong, Vivian C.; Steiner, Peter M.; Anglin, Kylie L. – Grantee Submission, 2018
Given the widespread use of non-experimental (NE) methods for assessing program impacts, there is a strong need to know whether NE approaches yield causally valid results in field settings. In within-study comparison (WSC) designs, the researcher compares treatment effects from an NE with those obtained from a randomized experiment that shares the…
Descriptors: Evaluation Methods, Program Evaluation, Program Effectiveness, Comparative Analysis
Peer reviewed
Hallberg, Kelly; Williams, Ryan; Swanlund, Andrew – Journal of Research on Educational Effectiveness, 2020
More aggregate data on school performance is available than ever before, opening up new possibilities for applied researchers interested in assessing the effectiveness of school-level interventions quickly and at a relatively low cost by implementing comparative interrupted time series (CITS) designs. We examine the extent to which effect…
Descriptors: Data Use, Research Methodology, Program Effectiveness, Design
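For readers unfamiliar with the CITS setup, the following is a minimal sketch assuming hypothetical school-by-year aggregate data; the specification is a generic CITS regression, not necessarily the one estimated in the paper.

```python
# Minimal sketch (not the authors' code): a comparative interrupted time
# series (CITS) regression on made-up school-by-year aggregate scores.
# year_c is years relative to the intervention, post flags post-intervention
# years, treated flags intervention schools.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":   [48, 49, 50, 51, 55, 56,   47, 48, 49, 50, 51, 52],
    "year_c":  [-3, -2, -1, 0, 1, 2] * 2,
    "post":    [0, 0, 0, 0, 1, 1] * 2,
    "treated": [1] * 6 + [0] * 6,
})

# 'post:treated' captures the post-intervention level shift for treated
# schools relative to their own pre-trend and to comparison schools;
# 'year_c:post:treated' captures any change in slope.
model = smf.ols("score ~ year_c * post * treated", data=df).fit()
print(model.params)
```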
Peer reviewed
Simpson, Adrian – Educational Researcher, 2019
A recent paper uses Bayes factors to argue a large minority of rigorous, large-scale education RCTs are "uninformative." The definition of "uninformative" depends on the authors' hypothesis choices for calculating Bayes factors. These arguably overadjust for effect size inflation and involve a fixed prior distribution,…
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
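The abstract's point about hypothesis choices can be seen in a toy normal-approximation Bayes factor: the same trial estimate looks more or less informative depending on the prior scale assumed under the alternative. All numbers below are invented for illustration and are not drawn from the paper.

```python
# Minimal sketch (not Simpson's analysis): a normal-approximation Bayes
# factor for a trial's estimated effect, comparing H0 (effect = 0) with
# alternatives that place a normal prior of varying scale on the effect.
import numpy as np
from scipy.stats import norm

est, se = 0.08, 0.05              # hypothetical effect estimate and standard error (SD units)

for tau in (0.05, 0.20, 0.40):    # prior SD on the true effect under H1
    m0 = norm.pdf(est, loc=0.0, scale=se)                       # marginal likelihood under H0
    m1 = norm.pdf(est, loc=0.0, scale=np.sqrt(se**2 + tau**2))  # under H1: effect ~ N(0, tau^2)
    print(f"prior SD = {tau:.2f}: BF10 = {m1 / m0:.2f}")
```

With these made-up numbers the Bayes factor shrinks as the prior widens, which is exactly the sensitivity to prior choice the paper argues about.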
Peer reviewed
Finucane, Mariel McKenzie; Martinez, Ignacio; Cody, Scott – American Journal of Evaluation, 2018
In the coming years, public programs will capture even more and richer data than they do now, including data from web-based tools used by participants in employment services, from tablet-based educational curricula, and from electronic health records for Medicaid beneficiaries. Program evaluators seeking to take full advantage of these data…
Descriptors: Bayesian Statistics, Data Analysis, Program Evaluation, Randomized Controlled Trials
Peer reviewed
Chow, Jason C.; Hampton, Lauren H. – Remedial and Special Education, 2019
Interventions often require multiple decisions to improve outcomes for every student. Whether the decision is to implement a practice, tailor an existing protocol, or change approaches, these decisions should be based on individual variables and outcomes over a sequence of treatment. To develop adaptive interventions that have sufficient evidence to…
Descriptors: Special Education, Intervention, Program Development, Program Evaluation
Peer reviewed
May, Henry; Jones, Akisha; Blakeney, Aly – AERA Online Paper Repository, 2019
Using an RD design provides statistically robust estimates while giving researchers an alternative causal estimation tool for educational environments where an RCT may not be feasible. Results from the External Evaluation of the i3 Scale-Up of Reading Recovery show that impact estimates were remarkably similar between a randomized control…
Descriptors: Regression (Statistics), Research Design, Randomized Controlled Trials, Research Methodology
Peer reviewed
Hedges, Larry V.; Schauer, Jacob – Educational Research, 2018
Background and purpose: Studies of education and learning that were described as experiments have been carried out in the USA by educational psychologists since about 1900. In this paper, we discuss the history of randomised trials in education in the USA in terms of five historical periods. In each period, the use of randomised trials was…
Descriptors: Randomized Controlled Trials, Educational Research, Educational Psychology, Educational History
Yoon, HyeonJin – ProQuest LLC, 2018
In basic regression discontinuity (RD) designs, causal inference is limited to the local area near a single cutoff. To strengthen the generality of the RD treatment estimate, a design with multiple cutoffs along the assignment variable continuum can be applied. The availability of multiple cutoffs allows estimation of a pooled average treatment…
Descriptors: Regression (Statistics), Program Evaluation, Computation, Statistical Analysis
Yoon, HyeonJin – Grantee Submission, 2018
In basic regression discontinuity (RD) designs, causal inference is limited to the local area near a single cutoff. To strengthen the generality of the RD treatment estimate, a design with multiple cutoffs along the assignment variable continuum can be applied. The availability of multiple cutoffs allows estimation of a pooled average treatment…
Descriptors: Regression (Statistics), Program Evaluation, Computation, Statistical Analysis
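As a rough sketch of what pooling across cutoffs can look like, the snippet below combines hypothetical cutoff-specific RD estimates using inverse-variance weights. This is a generic pooling approach, not necessarily the weighting scheme used in the dissertation, and the estimates are invented.

```python
# Minimal sketch: pooling cutoff-specific RD effect estimates into a single
# average treatment effect with inverse-variance (precision) weights. A real
# multi-cutoff RD would first estimate a local effect at each cutoff, e.g.
# by local linear regression near that cutoff.
import numpy as np

effects = np.array([0.12, 0.18, 0.09])   # hypothetical local RD estimates at three cutoffs
ses     = np.array([0.06, 0.08, 0.05])   # their standard errors

w = 1.0 / ses**2                          # precision weights
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled ATE = {pooled:.3f} (SE {pooled_se:.3f})")
```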
Hedges, Larry V.; Schauer, Jacob – Grantee Submission, 2018
Background and purpose: Studies of education and learning that were described as experiments have been carried out in the USA by educational psychologists since about 1900. In this paper, we discuss the history of randomised trials in education in the USA in terms of five historical periods. In each period, the use of randomised trials was…
Descriptors: Randomized Controlled Trials, Educational Research, Educational Psychology, Educational History
Lo-Hua Yuan; Avi Feller; Luke W. Miratrix – Grantee Submission, 2019
Randomized trials are often conducted with separate randomizations across multiple sites such as schools, voting districts, or hospitals. These sites can differ in important ways, including the site's implementation, local conditions, and the composition of individuals. An important question in practice is whether--and under what…
Descriptors: Causal Models, Intervention, High School Students, College Attendance