Showing 1 to 15 of 69 results
Peer reviewed
Meline, McKenzie; Harn, Beth; Jamgochian, Elisa; Strickland-Cohen, M. Kathleen; Linan-Thompson, Sylvia; Lucero, Audrey – Journal of Special Education, 2023
The purpose of this meta-analysis was to examine the literature base of single-case research design studies using video analysis to determine its effectiveness on teacher outcomes. Primary, ancestral, citation, and first author searches identified 12,047 dissertations and peer-reviewed articles published from 2010 to 2020. Each study (n = 24) was…
Descriptors: Video Technology, Program Effectiveness, Teacher Evaluation, Evaluation Methods
Peer reviewed
Hauser, Alexandra; Weisweiler, Silke; Frey, Dieter – Studies in Higher Education, 2020
Although the high costs of implementing personnel development programs in enterprises and increasingly in universities as well are commonly accepted, the scientifically-grounded evaluation of a program's effectiveness is often neglected. The aim of this paper was to evaluate a personnel development program for academics at a German university…
Descriptors: Faculty Development, College Faculty, Program Effectiveness, Program Evaluation
Peer reviewed
Demack, Sean; Maxwell, Bronwen; Coldwell, Mike; Stevens, Anna; Wolstenholme, Claire; Reaney-Wood, Sarah; Stiell, Bernadette; Lortie-Forgues, Hugues – Education Endowment Foundation, 2021
This report presents findings from exploratory, descriptive meta-analyses of effect sizes reported by the first 82 EEF evaluations that used a randomised controlled trial (RCT) or clustered RCT impact evaluation design published up to January 2019. The review used a theoretical framework derived from literature with five overarching themes to…
Descriptors: Foreign Countries, Disadvantaged Youth, Meta Analysis, Effect Size
Peer reviewed
Demack, Sean; Maxwell, Bronwen; Coldwell, Mike; Stevens, Anna; Wolstenholme, Claire; Reaney-Wood, Sarah; Stiell, Bernadette; Lortie-Forgues, Hugues – Education Endowment Foundation, 2021
This document summarises key findings from the quantitative strands of a review of the Education Endowment Foundation (EEF) evaluations that had reported from the establishment of EEF in 2011 up to January 2019. The quantitative strands summarised include meta-analyses of effect sizes reported for attainment outcomes and descriptive analyses of…
Descriptors: Foreign Countries, Disadvantaged Youth, Meta Analysis, Effect Size
Peer reviewed
Demack, Sean; Maxwell, Bronwen; Coldwell, Mike; Stevens, Anna; Wolstenholme, Claire; Reaney-Wood, Sarah; Stiell, Bernadette; Lortie-Forgues, Hugues – Education Endowment Foundation, 2021
This document provides supplementary statistical tables to support the review of Education Endowment Foundation (EEF) evaluations. This includes: (1) Descriptive (univariate) tables for all explanatory variables; (2) Tables for the meta-analyses of primary ITT effect sizes; (3) Tables for the meta-analyses of secondary ITT effect sizes; (4) Tables…
Descriptors: Foreign Countries, Disadvantaged Youth, Meta Analysis, Effect Size
Peer reviewed
Stern, Jonathan M. B.; Piper, Benjamin – RTI International, 2019
This paper uses recent evidence from international early grade reading programs to provide guidance about how best to create appropriate targets and more effectively identify improved program outcomes. Recent results show that World Bank and US Agency for International Development-funded large-scale international education interventions in low-…
Descriptors: Early Childhood Education, Elementary School Students, Reading Programs, Program Design
Peer reviewed
Bloom, Howard S.; Spybrook, Jessaca – Journal of Research on Educational Effectiveness, 2017
Multisite trials, which are being used with increasing frequency in education and evaluation research, provide an exciting opportunity for learning about how the effects of interventions or programs are distributed across sites. In particular, these studies can produce rigorous estimates of a cross-site mean effect of program assignment…
Descriptors: Program Effectiveness, Program Evaluation, Sample Size, Evaluation Research
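The cross-site quantities described here (a mean effect of program assignment and variation in true effects across sites) can be estimated from site-level impact estimates. Below is a minimal sketch using a standard method-of-moments (DerSimonian-Laird) estimator rather than the authors' own procedure; every number in it is invented for illustration.

```python
# Sketch: recover a cross-site mean effect and between-site variation in
# true effects from site-level impact estimates, via the
# DerSimonian-Laird method-of-moments estimator. Illustrative only.

site_effects = [0.12, 0.05, 0.22, -0.03, 0.15]  # hypothetical per-site impacts
site_se      = [0.06, 0.07, 0.05, 0.08, 0.06]   # hypothetical standard errors

def cross_site_summary(effects, ses):
    """Return (random-effects mean impact, between-site variance tau^2)."""
    w = [1.0 / s ** 2 for s in ses]               # inverse-variance weights
    mean_fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q measures dispersion of site estimates around the mean.
    q = sum(wi * (e - mean_fixed) ** 2 for wi, e in zip(w, effects))
    k = len(effects)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # method-of-moments tau^2
    # Re-weight including tau^2 to get the cross-site (random-effects) mean.
    w_re = [1.0 / (s ** 2 + tau2) for s in ses]
    mean_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return mean_re, tau2

mean_effect, tau2 = cross_site_summary(site_effects, site_se)
print(f"cross-site mean effect: {mean_effect:.3f}, tau^2: {tau2:.4f}")
```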
Peer reviewed
Talan, Tarik; Batdi, Veli – Turkish Online Journal of Distance Education, 2020
This study was carried out to determine the effectiveness of the Flipped Classroom Model (FCM) in an educational setting. For this purpose, a multi-complementary approach (MCA) was used, combining quantitative (meta-analysis) and qualitative (thematic) analyses. MCA consists of three parts, the first of which is the pre-complementary information stage.…
Descriptors: Foreign Countries, Blended Learning, Instructional Effectiveness, Models
Peer reviewed
Simpson, Adrian – Educational Researcher, 2019
A recent paper uses Bayes factors to argue a large minority of rigorous, large-scale education RCTs are "uninformative." The definition of "uninformative" depends on the authors' hypothesis choices for calculating Bayes factors. These arguably overadjust for effect size inflation and involve a fixed prior distribution,…
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
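The disagreement turns on how a Bayes factor depends on the prior placed on the effect size under the alternative hypothesis. A minimal normal-approximation sketch, with all numbers invented rather than drawn from any trial under discussion:

```python
import math

# Sketch: the effect estimate d is modelled as d ~ N(theta, se^2), with
# H0: theta = 0 and H1: theta ~ N(0, prior_sd^2). Illustrative only.

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bayes_factor_01(d, se, prior_sd):
    """BF_01: evidence for H0 over H1, given estimate d with standard error se."""
    like_h0 = normal_pdf(d, 0.0, se)                                  # theta fixed at 0
    like_h1 = normal_pdf(d, 0.0, math.sqrt(se ** 2 + prior_sd ** 2))  # theta integrated out
    return like_h0 / like_h1

d, se = 0.10, 0.06  # hypothetical effect size estimate and standard error
for prior_sd in (0.1, 0.3, 0.6):
    print(f"prior sd {prior_sd}: BF_01 = {bayes_factor_01(d, se, prior_sd):.2f}")
# The same data look progressively more "uninformative" (or favourable
# to H0) as the prior under H1 widens; this is why the hypothesis
# choices behind the Bayes factors are contested.
```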
Peer reviewed
Gorard, Stephen; Siddiqui, Nadia; See, Beng Huat – Educational Studies, 2016
This paper describes a randomised controlled trial conducted with 10 secondary schools in England to evaluate the impact and feasibility of Fresh Start as an intervention to help new entrants with low prior literacy. Fresh Start is a synthetic phonics programme for small groups of pupils, here implemented three times per week over 22 weeks. The…
Descriptors: Foreign Countries, Reading Programs, Intervention, Program Evaluation
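Trials of this kind typically report impact as a standardised effect size. A minimal sketch of one common choice, Hedges' g, computed from hypothetical group summaries (not this trial's results):

```python
import math

# Sketch: Hedges' g, a standardised mean difference with a small-sample
# correction. All inputs below are invented.

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference with small-sample correction."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd            # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # small-sample correction factor
    return d * j

# Hypothetical treatment vs. control literacy scores:
print(f"g = {hedges_g(104.2, 14.8, 210, 101.5, 15.1, 205):.2f}")
```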
Peer reviewed
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
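For readers unfamiliar with MTPs, below is a minimal sketch of two standard p-value adjustments; Porter's papers cover a broader class of procedures, and the p-values here are invented.

```python
# Sketch: Bonferroni (controls the family-wise error rate) and
# Benjamini-Hochberg (controls the false discovery rate), two common
# multiple testing procedures. Illustrative only.

def bonferroni(pvals):
    """Multiply each p-value by the number of tests (capped at 1)."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up adjusted p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        running_min = min(running_min, pvals[i] * m / (rank + 1))
        adjusted[i] = running_min
    return adjusted

raw = [0.001, 0.008, 0.020, 0.041, 0.300]  # hypothetical per-test p-values
print(bonferroni(raw))          # approx. [0.005, 0.04, 0.1, 0.205, 1.0]
print(benjamini_hochberg(raw))  # less conservative than Bonferroni
```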
Zimmerman, Kathleen N.; Pustejovsky, James E.; Ledford, Jennifer R.; Barton, Erin E.; Severini, Katherine E.; Lloyd, Blair P. – Grantee Submission, 2018
Varying methods for evaluating the outcomes of single case research designs (SCD) are currently used in reviews and meta-analyses of interventions. Quantitative effect size measures are often presented alongside visual analysis conclusions. Six measures across two classes--overlap measures (percentage non-overlapping data, improvement rate…
Descriptors: Research Design, Evaluation Methods, Synthesis, Intervention
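One of the overlap measures compared, percentage of non-overlapping data (PND), is simple enough to sketch directly; the series below are invented, and the sketch assumes a behaviour the intervention is meant to increase.

```python
# Sketch: percentage of non-overlapping data (PND) for a single-case
# design. Illustrative only; both series are invented.

def pnd(baseline, treatment):
    """Percentage of treatment points exceeding the highest baseline point."""
    ceiling = max(baseline)
    above = sum(1 for x in treatment if x > ceiling)
    return 100.0 * above / len(treatment)

baseline  = [2, 3, 2, 4, 3]     # hypothetical baseline-phase observations
treatment = [4, 5, 6, 5, 7, 6]  # hypothetical treatment-phase observations
print(f"PND = {pnd(baseline, treatment):.1f}%")  # 5 of 6 points exceed 4 -> 83.3%
```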
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Peer reviewed
Busse, R. T.; McGill, Ryan J.; Kennedy, Kelly S. – Contemporary School Psychology, 2015
The purpose of this article is to present various single-case outcome assessment methods for evaluating school-based intervention effectiveness. We present several outcome methods, including goal attainment scaling, visual analysis, trend analysis, percentage of non-overlapping data, single-case mean difference effect size, reliable change index,…
Descriptors: Evaluation Methods, Intervention, Outcome Measures, Program Effectiveness
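Among the listed metrics, the reliable change index (RCI) admits a compact sketch in its common Jacobson-Truax form; the pre/post scores, normative SD, and test reliability below are hypothetical.

```python
import math

# Sketch: reliable change index (RCI), Jacobson-Truax form.
# Illustrative only; all inputs are invented.

def reliable_change_index(pre, post, sd, reliability):
    """RCI = (post - pre) / SE_diff, where SE_diff = sqrt(2) * SEM."""
    sem = sd * math.sqrt(1.0 - reliability)  # standard error of measurement
    se_diff = math.sqrt(2.0) * sem           # standard error of a difference score
    return (post - pre) / se_diff

rci = reliable_change_index(pre=42, post=55, sd=10, reliability=0.90)
print(f"RCI = {rci:.2f}")  # |RCI| > 1.96 is conventionally read as reliable change
```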
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing