Showing 1 to 15 of 19 results
Peer reviewed
Johnson, Jeremiah; Hall, Jori; Greene, Jennifer C.; Ahn, Jeehae – American Journal of Evaluation, 2013
Evaluators have an obligation to present clearly the results of their evaluative efforts. Traditionally, such presentations showcase formal written and oral reports, with dispassionate language and graphs, tables, quotes, and vignettes. These traditional forms do not reach all audiences nor are they likely to include the most powerful presentation…
Descriptors: Evaluation Research, Change Strategies, Research Reports, Usability
Peer reviewed
Greene, Jennifer C. – Evaluation and Program Planning, 2010
This discussion foregrounds four key issues engaged by the articles presented in this special issue: the unique challenges and opportunities of environmental education evaluation, how to think well about the evaluation approaches and purposes that best match this domain, evaluation capacity building in environmental education and action, and…
Descriptors: Environmental Education, Public Policy, Accountability, Evaluation Methods
Peer reviewed
Benjamin, Lehn M.; Greene, Jennifer C. – American Journal of Evaluation, 2009
Today's public policy discussions increasingly focus on how "networks" of public and private actors collaborate across organizational, sectoral, and geographical boundaries to solve increasingly complex problems. Yet, many of evaluation's key concepts, including the evaluator's role, assume an evaluand that is programmatically or organizationally…
Descriptors: Evaluators, Public Policy, Networks, Problem Solving
Peer reviewed
Costantino, Tracie E.; Greene, Jennifer C. – American Journal of Evaluation, 2003
Describes the evaluation of an intergenerational storytelling program, offering reflections on the evaluation of practice from an initial evaluation approach of interpretive responses to an ending framework of narrative for the evaluation. (SLD)
Descriptors: Evaluation Methods, Intergenerational Programs, Personal Narratives, Program Evaluation
Peer reviewed
Greene, Jennifer C.; Lipsey, Mark W.; Schwandt, Thomas A.; Smith, Nick L.; Tharp, Roland G. – New Directions for Evaluation, 2007
Productive dialogue is informed best by multiple and diverse voices. Five seasoned evaluators, representing a range of evaluation perspectives, offer their views in two- to three-page discussant contributions. These individuals were asked to reflect and comment on the previous chapters in the spirit of critical review as a key source of evidence…
Descriptors: Evaluators, Evaluation Methods, Research Design, Inquiry
Peer reviewed
Alkin, Marvin C.; Christie, Christina A.; Greene, Jennifer C.; Henry, Gary T.; Donaldson, Stewart I.; King, Jean A. – New Directions for Evaluation, 2005
The editors give each of the theorists a chance to respond to questions posed about the context of the situation in relation to their own experience in the field, exploring how the exercise had an impact on their evaluation designs.
Descriptors: Program Evaluation, Evaluation Methods, Context Effect, Evaluators
Peer reviewed
Greene, Jennifer C.; Caracelli, Valerie J. – New Directions for Evaluation, 1997
Three primary stances on the wisdom of mixing evaluation models while mixing evaluation methods frame the challenges to defensible mixed-method evaluative inquiry. These challenges are addressed by shifting the mixed-method controversy from models toward other critical features of disparate traditions of inquiry. (Author/SLD)
Descriptors: Definitions, Evaluation Methods, Models, Program Evaluation
Peer reviewed
Greene, Jennifer C. – Evaluation and Program Planning, 1988
Linkages between utilization and stakeholder participation in the evaluation process are discussed, with emphasis on the communication of results. Data from two small-scale participatory evaluations are used to support the argument that key elements of the participatory process can be linked to meaningful and multiple forms of research…
Descriptors: Evaluation Utilization, Program Evaluation, Research Utilization, Use Studies
Peer reviewed
Greene, Jennifer C. – New Directions for Evaluation, 2000
Reflects on an evaluation that aspired to be inclusive but generally failed to provide a backdrop for a discussion of inclusive evaluation. Identifies issues of absence of significant stakeholders, making of values by method, and the limited authority of the evaluation. Shows how easily deliberative intentions are distorted. (SLD)
Descriptors: Democracy, Evaluation Methods, Evaluation Problems, Program Evaluation
Peer reviewed
Caracelli, Valerie J.; Greene, Jennifer C. – New Directions for Evaluation, 1997
Two broad classes of mixed-method designs--component and integrated--that have the potential to combine elements of different inquiry traditions are described. The conceptual ideas advanced in the first chapter are illustrated through selected examples of several mixed-method integrated models. (Author/SLD)
Descriptors: Case Studies, Evaluation Methods, Models, Program Evaluation
Peer reviewed
Greene, Jennifer C.; DeStefano, Lizanne; Burgon, Holli; Hall, Jori – New Directions for Evaluation, 2006
There is concern that the nation's schools are not preparing students to excel in science, technology, engineering, and mathematics (STEM) fields. At both precollege and postsecondary levels, much effort is needed to create and implement powerful STEM curricula, prepare and support highly qualified teachers, deliver effective instruction, and give…
Descriptors: Educational Quality, Program Effectiveness, Program Improvement, Educational Experience
Peer reviewed
Greene, Jennifer C. – American Journal of Evaluation, 2003
Discusses Margaret Mead's evaluation of the first Salzburg Seminar (1947), a forum for international exchange of intellectual ideas and cultural customs. Concludes that Mead's evaluation was highly successful in that it created an evocative representation of the seminar that told readers what it was like for its participants. (SLD)
Descriptors: Evaluation Methods, International Education, Personal Narratives, Program Evaluation
Greene, Jennifer C. – 1985
The naturalistic research perspective assumes that reality is multiplistic, phenomenological, and context-dependent. This perspective legitimizes the subjective insights of the investigator by acknowledging the interdependence of facts and values as well as of the investigator and the object of investigation. Although discrepancies between…
Descriptors: Case Studies, Data Analysis, Guidelines, Program Evaluation
Greene, Jennifer C. – 1986
Issues associated with the involvement of social program participants in program evaluation are analyzed. Although there seems to be broad consensus on the need for participatory evaluation, the proposed rationales and concomitant benefits to evaluation practice appear to be quite diverse. Key issues include the rationale for "stakeholder"…
Descriptors: Participation, Participative Decision Making, Policy Formation, Program Evaluation
Peer reviewed
Greene, Jennifer C. – Evaluation Practice, 1997
Advances the argument that advocacy in evaluation is inevitable when advocacy is understood as a value commitment to a particular representative ideal. The regulative ideal advanced in this article is a commitment to democratic pluralism. Three case examples illustrate these ideas. (SLD)
Descriptors: Advocacy, Case Studies, Democracy, Evaluation Methods