Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 12
Since 2006 (last 20 years): 36
Source
American Journal of Evaluation: 47
Author
Campbell, Rebecca: 3
Azzam, Tarek: 2
Fehler-Cabral, Giannina: 2
Penuel, William R.: 2
Adams, Adrienne E.: 1
Alexander, Neil: 1
Atkinson, Donna Durant: 1
Avula, Deepa: 1
Bamberger, Michael: 1
Bartlett, Susan: 1
Becker, Les: 1
Publication Type
Journal Articles: 47
Reports - Research: 17
Reports - Descriptive: 15
Reports - Evaluative: 14
Book/Product Reviews: 1
Information Analyses: 1
Opinion Papers: 1
Education Level
Higher Education: 3
Elementary Secondary Education: 2
Early Childhood Education: 1
Elementary Education: 1
Grade 8: 1
Postsecondary Education: 1
Preschool Education: 1
Audience
Researchers: 1
Teachers: 1
Location
Afghanistan: 1
Alaska: 1
Denmark: 1
Guinea: 1
Kenya: 1
Michigan (Detroit): 1
Nepal: 1
North Carolina: 1
Oregon: 1
Pennsylvania: 1
Philippines: 1
Campbell, Rebecca; Goodman-Williams, Rachael; Feeney, Hannah; Fehler-Cabral, Giannina – American Journal of Evaluation, 2020
The purpose of this study was to develop triangulation coding methods for a large-scale action research and evaluation project and to examine how practitioners and policy makers interpreted both convergent and divergent data. We created a color-coded system that evaluated the extent of triangulation across methodologies (qualitative and…
Descriptors: Mixed Methods Research, Action Research, Data Interpretation, Coding
de Alteriis, Martin – American Journal of Evaluation, 2020
This article examines factors that could have influenced whether evaluations of U.S. government-funded foreign assistance programs completed in 2015 had considered unintended consequences. Logit regression models indicate that the odds of considering unintended consequences were increased when all or most of seven standard data collection methods…
Descriptors: Federal Programs, International Programs, Program Evaluation, Influences
Hilton, Lara G.; Azzam, Tarek – American Journal of Evaluation, 2019
Evaluations that include stakeholders aim to understand their perspectives and to ensure that their views are represented. This article offers a new approach to gaining stakeholder perspectives through crowdsourcing. We recruited a sample of individuals with chronic low back pain through a crowdsourcing site. This sample coded textual data…
Descriptors: Qualitative Research, Stakeholders, Data Collection, Chronic Illness
Pattyn, Valérie; Molenveld, Astrid; Befani, Barbara – American Journal of Evaluation, 2019
Qualitative comparative analysis (QCA) is gaining ground in evaluation circles, but the number of applications is still limited. In this article, we consider the challenges that can emerge during a QCA evaluation by drawing on our experience of conducting one in the field of development cooperation. For each stage of the evaluation process, we…
Descriptors: Qualitative Research, Comparative Analysis, Evaluation Methods, Program Evaluation
Finucane, Mariel McKenzie; Martinez, Ignacio; Cody, Scott – American Journal of Evaluation, 2018
In the coming years, public programs will capture even more and richer data than they do now, including data from web-based tools used by participants in employment services, from tablet-based educational curricula, and from electronic health records for Medicaid beneficiaries. Program evaluators seeking to take full advantage of these data…
Descriptors: Bayesian Statistics, Data Analysis, Program Evaluation, Randomized Controlled Trials
Stelmach, Rachel D.; Fitch, Elizabeth; Chen, Molly; Meekins, Meagan; Flueckiger, Rebecca M.; Colaço, Rajeev – American Journal of Evaluation, 2022
Monitoring, evaluation, and research activities generate important data, but they often fail to change policies or programs. In addition, local program staff and partners often feel disconnected from these activities, which undermines their ownership of data and results. To bridge the gaps between monitoring, evaluation, and research and to give…
Descriptors: Evidence Based Practice, Evaluation, Research, Global Approach
Morell, Jonathan A. – American Journal of Evaluation, 2019
Project schedules are logic models that focus on the timing of program activities. Value derives from the fact that schedule changes are not random. Why they occur, and how long they last, can reveal information that would not be easily revealed with other approaches to evaluation. Also, using project schedules as logic models forges a strong link…
Descriptors: Scheduling, Program Administration, Models, Logical Thinking
Cueva, Katie; Fenaughty, Andrea; Liendo, Jessica Aulasa; Hyde-Rolland, Samantha – American Journal of Evaluation, 2020
Chronic diseases with behavioral risk factors are now the leading causes of death in the United States. A national Behavioral Risk Factor Surveillance System (BRFSS) monitors those risk factors; however, there is a need for national and state evaluations of chronic disease surveillance systems. The Department of Health and Human Services/Centers…
Descriptors: Chronic Illness, At Risk Persons, Program Evaluation, Evaluation Methods
Jacobson, Miriam R.; Whyte, Cristina E.; Azzam, Tarek – American Journal of Evaluation, 2018
Evaluators can work with brief units of text-based data, such as open-ended survey responses, text messages, and social media postings. Online crowdsourcing is a promising method for quantifying large amounts of text-based data by engaging hundreds of people to categorize the data. To further develop and test this method, individuals were…
Descriptors: Mixed Methods Research, Evaluation Methods, Comparative Analysis, Feedback (Response)
Groth Andersson, Signe; Denvall, Verner – American Journal of Evaluation, 2017
In recent years, performance management (PM) has become a buzzword in public sector organizations. Well-functioning PM systems rely on valid performance data, but critics point out that conflicting rationale or logic among professional staff in recording information can undermine the quality of the data. Based on a case study of social service…
Descriptors: Performance, Social Services, Case Studies, Data Collection
Brandon, Paul R.; Fukunaga, Landry L. – American Journal of Evaluation, 2014
Evaluators widely agree that stakeholder involvement is a central aspect of effective program evaluation. With the exception of articles on collaborative evaluation approaches, however, a systematic review of the breadth and depth of the literature on stakeholder involvement has not been published. In this study, we examine peer-reviewed empirical…
Descriptors: Stakeholders, Research, Data Collection, Observation
Granger, Robert C.; Maynard, Rebecca – American Journal of Evaluation, 2015
Despite bipartisan support in Washington, DC, which dates back to the mid-1990s, the "what works" approach has yet to gain broad support among policymakers and practitioners. One way to build such support is to increase the usefulness of program impact evaluations for these groups. We describe three ways to make impact evaluations more…
Descriptors: Outcome Measures, Program Evaluation, Evaluation Utilization, Policy
Klerman, Jacob Alex; Olsho, Lauren E. W.; Bartlett, Susan – American Journal of Evaluation, 2015
While regression discontinuity has usually been applied retrospectively to secondary data, it is even more attractive when applied prospectively. In a prospective design, data collection can be focused on cases near the discontinuity, thereby improving internal validity and substantially increasing precision. Furthermore, such prospective…
Descriptors: Regression (Statistics), Evaluation Methods, Evaluation Problems, Probability
Durand, Roger; Decker, Phillip J.; Kirkman, Dorothy M. – American Journal of Evaluation, 2014
Despite our best efforts as evaluators, program implementation failures abound. A wide variety of valuable methodologies have been adopted to explain and evaluate the "why" of these failures. Yet, typically these methodologies have been employed concurrently (e.g., project monitoring) or applied to the post-hoc assessment of program activities.…
Descriptors: Evaluation Methods, Program Implementation, Failure, Program Effectiveness
Hall, Jori N.; Freeman, Melissa – American Journal of Evaluation, 2014
Shadowing is a data collection method that involves following a person as they carry out the everyday activities relevant to a research study. This article explores the use of shadowing in a formative evaluation of a professional development school (PDS). Specifically, this article discusses how shadowing was used to understand the role of a…
Descriptors: Formative Evaluation, Capacity Building, Professional Development Schools, Data Collection