Showing all 8 results
Peer reviewed
PDF full text available on ERIC
Marc T. Braverman – Journal of Human Sciences & Extension, 2019
This article examines the concept of credible evidence in Extension evaluations with specific attention to the measures and measurement strategies used to collect and create data. Credibility depends on multiple factors, including data quality and methodological rigor, characteristics of the stakeholder audience, stakeholder beliefs about the…
Descriptors: Extension Education, Program Evaluation, Evaluation Methods, Planning
Peer reviewed
PDF full text available on ERIC
Kenneth R. Jones; Eugenia P. Gwynn; Allison M. Teeter – Journal of Human Sciences & Extension, 2019
This article provides insight into how an adequate approach to selecting methods can establish credible and actionable evidence. The authors offer strategies to effectively support Extension professionals, including program developers and evaluators, in being more deliberate when selecting appropriate qualitative and quantitative methods. In…
Descriptors: Evaluation Methods, Credibility, Evidence, Evaluation Criteria
Peer reviewed
Direct link
Chazdon, Scott; Meyer, Nathan; Mohr, Caryn; Troschinetz, Alexis – Journal of Extension, 2017
The public value poster session is a new tool for effectively demonstrating and reporting the public value of Extension programming. Akin to the research posters that have long played a critical role in the sharing of findings from academic studies, the public value poster provides a consistent format for conveying the benefits to society of…
Descriptors: Extension Education, Program Evaluation, Educational Benefits, Capacity Building
Peer reviewed
PDF full text available on ERIC
Chelsea Hetherington; Cheryl Eschbach; Courtney Cuthbertson – Journal of Human Sciences & Extension, 2019
Evaluation capacity building (ECB) is an essential element for generating credible and actionable evidence on Extension programs. This paper offers a discussion of ECB efforts in Cooperative Extension and how such efforts enable Extension professionals to collect and use credible and actionable evidence on the quality and impacts of programs.…
Descriptors: Cooperative Education, Extension Education, Capacity Building, Program Evaluation
Peer reviewed
Direct link
Schuwirth, Lambert W. T.; Van Der Vleuten, Cees P. M. – Journal of Applied Testing Technology, 2019
Programmatic assessment is both a philosophy and a method of assessment. It was developed in medical education in response to the limitations of dominant testing and measurement approaches and to better align with changes in how medical competence is conceptualised. It is based on continual collection of assessment and feedback…
Descriptors: Program Evaluation, Medical Education, Competency Based Education, Feedback (Response)
Peer reviewed
Direct link
Cook, James R. – American Journal of Evaluation, 2015
Program evaluation is generally viewed as a set of mechanisms for collecting and using information to learn about projects, policies, and programs and to understand both their effects and the manner in which they are implemented. The American Evaluation Association (AEA) has espoused principles for evaluation that emphasize competent, honest inquiry that respects the security,…
Descriptors: Program Evaluation, Social Justice, Community Change, Change Strategies
Peer reviewed
Felix, Joseph L. – Educational Evaluation and Policy Analysis, 1979
Describes evaluation procedures and programs in the Cincinnati, Ohio, school system, the role of local school evaluators, and models for school evaluation based on high, moderate, or low trust. Evaluators serve local schools in formative and summative evaluation projects and in assessing and meeting needs. (MH)
Descriptors: Credibility, Evaluation Methods, Evaluation Needs, Evaluators
Peer reviewed
Lowry, Robert C.; Silver, Brian D. – PS: Political Science and Politics, 1996
Asserts that the variance between a university's reputation as an institution and its commitment to research has a greater impact on political science department rankings than any factors internal to the department. Includes several tables showing statistical variables for department and university rankings. (MJP)
Descriptors: Academic Education, Achievement Rating, Analysis of Variance, Credibility