Showing 1 to 15 of 56 results
Peer reviewed
Andrew P. Jaciw – American Journal of Evaluation, 2024
In the current socio-political climate, there is heightened urgency to evaluate whether program impacts are distributed fairly across important student groups in education. Both experimental and quasi-experimental designs (QEDs) can contribute to answering this question. This work demonstrates that QEDs that compare outcomes across higher-level…
Descriptors: Students, Program Evaluation, Social Development, Social Bias
Peer reviewed
Katherine Pye; Hannah Jackson; Teresa Iacono; Alan Shiell – Journal of Autism and Developmental Disorders, 2024
Many autistic children access some form of early intervention, but little is known about the value for money of different programs. We completed a scoping review of full economic evaluations of early interventions for autistic children and/or their families. We identified nine studies and reviewed their methods and quality. Most studies involved…
Descriptors: Economics, Early Intervention, Autism Spectrum Disorders, Children
Peer reviewed
Suzanne Nobrega; Kasper Edwards; Mazen El Ghaziri; Lauren Giacobbe; Serena Rice; Laura Punnett – American Journal of Evaluation, 2024
Program evaluations that lack experimental design often fail to produce evidence of impact because there is no available control group. Theory-based evaluations can generate evidence of a program's causal effects if evaluators collect evidence along the theorized causal chain and identify possible competing causes. However, few methods are…
Descriptors: STEM Education, Gender Differences, Intervention, Program Evaluation
Maynard, Rebecca A.; Baelen, Rebecca N.; Fein, David; Souvanna, Phomdaen – Grantee Submission, 2022
This article offers a case example of how experimental evaluation methods can be coupled with principles of design-based implementation research (DBIR), improvement science (IS), and rapid-cycle evaluation (RCE) to provide relatively quick, low-cost, credible assessments of strategies designed to improve programs, policies, or practices.…
Descriptors: Program Improvement, Evaluation Methods, Efficiency, Young Adults
Sadro, Fay; Marsh, Lucy; Klenk, Hazel; Egglestone, Corin; Friel, Seana – Learning and Work Institute, 2021
The future of work is changing, providing opportunities for new careers and novel ways of working. Changes to the world of work will be driven by long-term trends such as technological and demographic change as well as the ongoing impact of the COVID-19 pandemic. Future skills requirements will increasingly emphasise interpersonal, higher-order…
Descriptors: Adult Learning, Career Change, Electronic Learning, Job Skills
Kenneth M. Zeichner, Editor; Linda Darling-Hammond, Editor; Amy I. Berman, Editor; Dian Dong, Editor; Gary Sykes, Editor – National Academy of Education, 2025
The "Evaluating and Improving Teacher Preparation Programs" consensus report provides critical, evidence-based recommendations for teacher preparation program (TPP) evaluation and improvement and the systemic changes necessary to improve teaching as a profession. The report documents the extensive research that supports four groups of…
Descriptors: Teacher Education Programs, Program Evaluation, Program Improvement, Program Design
Peer reviewed
Brady, Eavan; Holt, Stephanie; Whelan, Sadhbh – Child Care in Practice, 2018
This article provides the reader with an overview of the literature relating to family support services while highlighting a number of key issues for consideration in evaluating these diverse and complex interventions. Drawing on a case example of a family support service evaluation carried out in Ireland in 2014 by the authors, this article will…
Descriptors: Family Programs, Program Evaluation, Intervention, Case Studies
Peer reviewed
Richardson, Emma Z. L.; Phillips, Mary; Colom, Alejandra; Nichols, Jennica – American Journal of Evaluation, 2019
Program participants have been largely excluded as an evidence source in realist evaluations. We test whether and how lived experience as described through life history interviews with pilot program participants can be used as a valid and unique source of data for elucidating context (C)-mechanism (M)-outcome (O) configurations and informing…
Descriptors: Experience, Autobiographies, Personal Narratives, Evaluation Methods
Peer reviewed
Richmond, Melissa K.; Pampel, Fred C.; Zarcula, Flavia; Howey, Virginia; McChesney, Brenda – Research on Social Work Practice, 2017
Purpose: Family support programs commonly use self-sufficiency matrices (SSMs) to measure family outcomes; however, validation research on SSMs is sparse. This study examined the reliability of the Colorado Family Support Assessment 2.0 (CFSA 2.0) to measure family self-reliance across 14 domains (e.g., employment). Methods: Ten written case…
Descriptors: Reliability, Family Programs, Case Studies, Correlation
Peer reviewed
Dunst, Carl J.; Annas, Kimberly; Wilkie, Helen; Hamby, Deborah W. – World Journal of Education, 2019
A review of 25 technical assistance models and frameworks was conducted to identify the core elements of technical assistance practices. The analysis focused on generally agreed-upon technical assistance practices considered essential for planning, implementing, and evaluating the effectiveness of technical assistance. Results…
Descriptors: Technical Assistance, Evaluation Methods, Models, Program Development
Peer reviewed
Downes, Amia; Novicki, Emily; Howard, John – American Journal of Evaluation, 2019
Interest from Congress, executive branch leadership, and various other stakeholders in greater accountability in government continues to gain momentum through government-wide efforts. However, measuring the impact of research programs has proven particularly difficult. Cause-and-effect linkages between research findings and changes to…
Descriptors: Program Evaluation, Evaluators, Research Projects, Outcome Measures
Peer reviewed
Bastian, Kevin C.; Patterson, Kristina M.; Pan, Yi – Journal of Teacher Education, 2018
States are incorporating evaluation ratings into new, multioutcome teacher preparation program (TPP) evaluation systems, yet little is known about the relationships between TPPs and the evaluation ratings of program graduates. To address this gap, we use teachers' ratings on the North Carolina Educator Evaluation System to determine whether TPPs…
Descriptors: Teacher Education Programs, Program Evaluation, Teacher Attitudes, Teacher Evaluation
Peer reviewed
Paz-Ybarnegaray, Rodrigo; Douthwaite, Boru – American Journal of Evaluation, 2017
This article describes the development and use of a rapid evaluation approach to meet program accountability and learning requirements in a research for development program operating in five developing countries. The method identifies clusters of outcomes, both expected and unexpected, happening within areas of change. In a workshop, change agents…
Descriptors: Evaluation Methods, Program Evaluation, Accountability, Developing Nations
Peer reviewed
Jayaratne, K. S. U. – Journal of Extension, 2015
Extension educators have been challenged to be cost-effective in their educational programming. The cost-effectiveness ratio is a versatile evaluation indicator for Extension educators to compare the cost of achieving a unit of outcomes or educating a client in similar educational programs. This article describes the cost-effectiveness ratio and…
Descriptors: Extension Education, Cost Effectiveness, Program Effectiveness, Program Evaluation
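As a quick illustration of the indicator described in the entry above, the cost-effectiveness ratio divides total program cost by the units of outcome achieved; the figures below are hypothetical and not drawn from the article:

\[
\mathrm{CER} = \frac{\text{total program cost}}{\text{units of outcome achieved}} = \frac{\$12{,}000}{300\ \text{clients educated}} = \$40\ \text{per client}
\]

Computed the same way across similar programs, the ratio indicates which program delivers a unit of outcome at the lowest cost.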
Peer reviewed
Connolly, John; Reid, Garth; Mooney, Allan – Teaching Public Administration, 2015
It is necessary for public managers to be able to evaluate programmes in the context of complexity. This article offers key learning and reflections based on the experience of facilitating the evaluation of complexity with a range of public sector partners in Scotland. There have been several articles that consider evaluating complexity and…
Descriptors: Foreign Countries, Public Sector, Program Evaluation, Health Services