Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 38 |
Author
| Author | Count |
| --- | --- |
| Zhentian, Liu | 2 |
| Aiona, Shelli | 1 |
| Alkin, Marvin C. | 1 |
| Anton, Jennifer | 1 |
| Athanasou, James A. | 1 |
| Baldwin, Tamara | 1 |
| Barajas, Clara B. | 1 |
| Bartram, Dave | 1 |
| Blattner, Nancy | 1 |
| Blendinger, Jack | 1 |
| Bruner, Charles | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Reports - Descriptive | 54 |
| Journal Articles | 47 |
| Opinion Papers | 2 |
| Information Analyses | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Adult Education | 15 |
| Higher Education | 15 |
| Elementary Secondary Education | 10 |
| Early Childhood Education | 3 |
| Postsecondary Education | 2 |
| Elementary Education | 1 |
| Two Year Colleges | 1 |
Audience
| Audience | Count |
| --- | --- |
| Practitioners | 2 |
| Researchers | 2 |
| Teachers | 2 |
| Policymakers | 1 |
Laws, Policies, & Programs
| No Child Left Behind Act 2001 | 2 |
| Goals 2000 | 1 |
Assessments and Surveys
| Assessment or Survey | Count |
| --- | --- |
| Program for International… | 1 |
Mark, Melvin M. – American Journal of Evaluation, 2022
Premised on the idea that evaluators should be familiar with a range of approaches to program modifications, I review several existing approaches and then describe another, less well-recognized option. In this newer option, evaluators work with others to identify potentially needed adaptations for select program aspects "in advance." In…
Descriptors: Evaluation Research, Evaluation Problems, Evaluation Methods, Models
Ellington, Roni; Barajas, Clara B.; Drahota, Amy; Meghea, Cristian; Uphold, Heatherlun; Scott, Jamil B.; Lewis, E. Yvonne; Furr-Holden, C. Debra – American Journal of Evaluation, 2022
Over the last few decades, there has been an increase in the number of large federally funded transdisciplinary programs and initiatives. Scholars have identified a need to develop frameworks, methodologies, and tools to evaluate the effectiveness of these large collaborative initiatives, providing precise ways to understand and assess the…
Descriptors: Evaluation Research, Evaluation Problems, Evaluation Methods, Program Evaluation
Alkin, Marvin C.; King, Jean A. – American Journal of Evaluation, 2017
The second article in this series on the history of evaluation use has three sections. The first and longest section develops a functional definition of the term "use," noting that a thorough definition of evaluation use includes the initial stimulus (i.e., evaluation findings or process), the user, the way people use the information, the aspect…
Descriptors: Definitions, Users (Information), Ethics, Evaluation
Killion, Joellen – Learning Professional, 2017
Evaluation of professional learning illuminates the interactions that occur in the implementation of planned learning experiences and the necessary supports designed to improve professional practice and its effects on students. It investigates how a set of actions designed to achieve defined short- and long-term outcomes occur over time and how…
Descriptors: Evaluation Problems, Evaluation Research, Professional Education, Barriers
Spurgeon, Shawn L. – Measurement and Evaluation in Counseling and Development, 2017
Construct irrelevance (CI) and construct underrepresentation (CU) are 2 major threats to validity, yet they are rarely discussed within the counseling literature. This article provides information about the relevance of these threats to internal validity. An illustrative case example will be provided to assist counselors in understanding these…
Descriptors: Construct Validity, Evaluation Criteria, Evaluation Methods, Evaluation Problems
Huang, Xiaoping; Hu, Zhongfeng – Higher Education Studies, 2015
The main problem with validity in educational evaluation is that it simply copies the conceptual framework of validity from educational measurement into its own conceptual system. A validity framework that fits the needs of the theory and practice of educational evaluation has not yet been established. According to the inherent attributive…
Descriptors: Test Validity, Educational Assessment, Evaluation Problems, Theory Practice Relationship
Stumpf, Dan; King, Stephanie; Blendinger, Jack; Davis, Ed – Community College Journal of Research and Practice, 2013
Because the process of faculty evaluation in the community college gives rise to ethical concerns about what is evaluated, who is involved in the process, and how data are collected and used, the purpose of this paper is to provide a meaningful ethical perspective for conducting faculty evaluation. The authors discuss ethical issues that arise in…
Descriptors: Ethics, Community Colleges, Teacher Evaluation, Evaluation Problems
Chouinard, Jill Anne – American Journal of Evaluation, 2013
Evaluation occurs within a specific context and is influenced by the economic, political, historical, and social forces that shape that context. The culture of evaluation is thus very much embedded in the culture of accountability that currently prevails in public sector institutions, policies, and programs. As such, our understanding of the…
Descriptors: Accountability, Public Sector, Participatory Research, Context Effect
TNTP, 2011
This paper presents myths as well as facts about value-added analysis. These myths include: (1) "Value-added isn't fair to teachers who work in high-need schools, where students tend to lag far behind academically"; (2) "Value-added scores are too volatile from year-to-year to be trusted"; (3) "There's no research behind value-added"; (4) "Using…
Descriptors: Academic Achievement, Standardized Tests, Teacher Evaluation, Evaluation Methods
Kirkhart, Karen E. – New Directions for Evaluation, 2011
Understanding the influence of multisite evaluation requires careful consideration of cultural context. The author illustrates dimensions of influence and culture with excerpts from four National Science Foundation evaluation case studies and summarizes what influence teaches everyone about culture and what culture teaches everyone about…
Descriptors: Evaluation Utilization, Cultural Context, Evaluation Research, Program Evaluation
Palmer, Stuart – Quality in Higher Education, 2012
Student evaluation of teaching is commonplace in many universities and may be the predominant input into the performance evaluation of staff and organisational units. This article used publicly available student evaluation of teaching data to present examples of where institutional responses to evaluation processes appeared to be educationally…
Descriptors: Teacher Effectiveness, Evaluation Methods, Student Evaluation of Teacher Performance, Evaluation Problems
Harrison, Chris – Education in Science, 2012
There is perhaps no subject more contentious in schools than assessment and yet, often, at classroom, school and national level, inferences and decisions are made without much reference to research in this area. In fact, teachers often accept or interpret assessment requirements without question, feeling that assessment has to be approached in a…
Descriptors: Student Evaluation, Educational Assessment, Inferences, Teaching Guides
Fleenor, Andy; Lamb, Sarah; Anton, Jennifer; Stinson, Todd; Donen, Tony – Principal Leadership, 2011
It can be quite alarming (and eye-opening) to see exactly how many of the grades students receive are based on their behaviors rather than their learning. Students should be assessed on what they know and can use rather than on their behavior. The reality, unfortunately, is that the opposite is often the case. Grades for students who work hard are…
Descriptors: Grades (Scholastic), Grading, Educational Practices, Evaluation Problems
Stiggins, Rick – Phi Delta Kappan, 2009
As students enter the upper elementary grades, the emotional dynamics of their assessment experiences begin to affect them in both helpful and harmful ways. Those who experience success on assessments gain a strong sense of academic self-efficacy. Those who experience failure lose their sense of control over their own academic well-being. The…
Descriptors: Self Efficacy, Student Evaluation, Evaluation Research, Evaluation Problems
Miner, Jeremy T. – Research Management Review, 2011
After months of waiting, the grant reviews came back: "excellent," "excellent," and "fair." What?! How can this be? Why is the third review so out of line with the first two? On more than one occasion a principal investigator (PI) has been frustrated not only by a negative funding decision but more so by the accompanying reviewer evaluation forms…
Descriptors: Research Administration, Grants, Feedback (Response), Evaluation Criteria
