Showing all 11 results
Peer reviewed
Alyson Collins; Stephen Ciullo; Steve Graham; Joong won Lee – Society for Research on Educational Effectiveness, 2024
Background/Context: Many students in the United States need access to effective writing instruction, with some requiring intensive intervention. Results from the National Assessment of Educational Progress (NAEP) in writing over the past 25 years revealed that most students in the U.S. have yet to attain a proficient level of written expression…
Descriptors: Program Effectiveness, Writing Instruction, Elementary School Students, Intervention
Peer reviewed
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
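As an illustrative aside (not drawn from Porter's paper), the sketch below shows one widely used multiple testing procedure, the Benjamini-Hochberg step-up adjustment, applied to hypothetical p-values from several outcome-by-subgroup contrasts; the function name and the p-values are invented for the example.

import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    # Return a boolean array marking which hypotheses are rejected
    # while controlling the false discovery rate at level alpha.
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                        # indices that sort p-values ascending
    ranked = p[order]
    thresholds = alpha * np.arange(1, m + 1) / m # step-up thresholds alpha * rank / m
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.nonzero(below)[0].max())      # largest rank meeting the criterion
        reject[order[: k + 1]] = True            # reject that hypothesis and all smaller p-values
    return reject

# Hypothetical p-values for four outcomes measured in two subgroups.
print(benjamini_hochberg([0.003, 0.021, 0.048, 0.062, 0.11, 0.34, 0.49, 0.74]))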
Dynarski, Mark – Regional Educational Laboratory, 2016
This brief provides tips writers can use to make impact research more digestible and actionable for policymakers and practitioners. The brief emphasizes five tips: make the contrast clear, make causal statements only when they result from causal research designs, present numbers simply and concretely, describe effects in meaningful units, and…
Descriptors: Research, Writing Strategies, Research Reports, Program Effectiveness
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Peer reviewed
Schochet, Peter Z.; Puma, Mike; Deke, John – National Center for Education Evaluation and Regional Assistance, 2014
This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed…
Descriptors: Statistical Analysis, Evaluation Methods, Educational Research, Intervention
Peer reviewed
Sheridan, Susan M.; Clarke, Brandy L.; Knoche, Lisa L.; Edwards, Carolyn Pope – Early Education and Development, 2006
Conjoint behavioral consultation (CBC) is an ecological model of service delivery that brings together parents and educators to collaboratively address shared concerns for a child. This study provides exploratory data investigating the effects of CBC on home and school concerns for 48 children aged 6 and younger. Single-subject methods were used…
Descriptors: Effect Size, Consultation Programs, Teamwork, Delivery Systems
Lynch, Kathleen Bodisch – 1987
Current practice in educational program evaluation was examined through analyses of 232 reports submitted, from 1980 to 1983, by institutions seeking approval for their programs from the U.S. Department of Education's Joint Dissemination Review Panel (JDRP). The JDRP reviews these reports to determine whether educational programs have demonstrated…
Descriptors: Educational Practices, Effect Size, Evaluation Criteria, Evaluation Methods
Lynch, Kathleen Bodisch – 1986
Educational programs and evaluations which were submitted to the Department of Education's Joint Dissemination Review Panel (JDRP), in order to be named validated programs, were studied to identify program characteristics associated with large versus small effect size. Effect size was calculated for 165 out of 232 submittals reviewed by JDRP from…
Descriptors: Effect Size, Elementary Secondary Education, Evaluation Criteria, Evaluation Methods
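For readers unfamiliar with the effect sizes tallied in the two Lynch studies above, the sketch below computes a pooled standardized mean difference (Cohen's d), one common form of the statistic; it is offered only as an illustration, not as the specific formula used for the JDRP submittals, and the scores are invented.

import math

def cohens_d(treatment, comparison):
    # Pooled standardized mean difference between two lists of scores.
    n1, n2 = len(treatment), len(comparison)
    m1 = sum(treatment) / n1
    m2 = sum(comparison) / n2
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in comparison) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical posttest scores for program and comparison students.
print(round(cohens_d([52, 55, 61, 58, 60], [48, 50, 47, 53, 52]), 2))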
van der Ploeg, Arie J.; And Others – 1986
The effectiveness of an Education Consolidation Improvement Act Chapter 1 program was studied in a large urban school district. A common problem in evaluating remedial programs is that pretest/posttest achievement gains often indicate the effects of the entire curriculum, rather than those specific to the remedial class. In this district, over…
Descriptors: Core Curriculum, Effect Size, Elementary Education, Evaluation Methods
Madhere, Serge – 1986
One of the most appropriate quasi-experimental approaches to compensatory education is the regression-discontinuity design. However, it remains underutilized, in part because of the need to clarify the link between the mathematical model and administrative decision-making. This paper explains the derivation of a program efficiency index congruent…
Descriptors: Compensatory Education, Cutting Scores, Effect Size, Elementary Education
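As a hedged illustration of the regression-discontinuity design mentioned in the Madhere abstract (not of his program efficiency index), the sketch below reads the program effect off as the jump between two linear fits at the cutting score; the data and the assumption that students scoring below the cutoff receive the compensatory program are invented for the example.

import numpy as np

def rd_effect(scores, outcomes, cutoff):
    # Difference between the two fitted lines evaluated at the cutoff.
    # Students scoring below the cutoff are assumed to receive the program.
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    below = scores < cutoff
    # Fit outcome = a + b * (score - cutoff) separately on each side.
    left = np.polyfit(scores[below] - cutoff, outcomes[below], 1)
    right = np.polyfit(scores[~below] - cutoff, outcomes[~below], 1)
    # The intercepts are the predicted outcomes at the cut score itself.
    return np.polyval(left, 0.0) - np.polyval(right, 0.0)

# Hypothetical pretest scores, posttest outcomes, and a cut score of 40.
pre = [22, 28, 31, 35, 38, 42, 45, 51, 55, 60]
post = [45, 48, 50, 53, 55, 50, 52, 55, 58, 61]
print(round(rd_effect(pre, post, 40), 2))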