Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 11 |
Descriptor
| Evaluation Methods | 33 |
| Evaluation Problems | 33 |
| Research Design | 33 |
| Research Methodology | 15 |
| Program Evaluation | 12 |
| Evaluation Research | 8 |
| Data Collection | 6 |
| Research Problems | 6 |
| Evaluation Criteria | 5 |
| Evaluation Utilization | 5 |
| Measurement Techniques | 5 |
Author
| Baker, Eva L. | 2 |
| Glazerman, Steven | 2 |
| Alkin, Marvin C. | 1 |
| Bamberger, Michael | 1 |
| Bednarz, Dan | 1 |
| Blinn-Pike, Lynn M. | 1 |
| Briefel, Ronette | 1 |
| Bukoski, William J. | 1 |
| Bybee, Deborah | 1 |
| Cahan, Sorel | 1 |
| Cashin, William E. | 1 |
Education Level
| Higher Education | 3 |
| Adult Education | 2 |
| Postsecondary Education | 2 |
| Elementary Secondary Education | 1 |
Audience
| Researchers | 2 |
Location
| United Kingdom | 2 |
| California | 1 |
| Israel | 1 |
Laws, Policies, & Programs
| Stewart B McKinney Homeless… | 1 |
Hagans, Kristi S.; Powers, Kristin – Action in Teacher Education, 2015
The Council for the Accreditation of Educator Preparation (CAEP) requires faculty from educator preparation programs to provide evidence of credential candidates' impact on K-12 student learning. However, there is a paucity of information on preparation programs' use of direct assessments of student learning to gauge credential candidate…
Descriptors: Credentials, Academic Achievement, Teacher Effectiveness, Teacher Certification
Phillips, Gary W. – Applied Measurement in Education, 2015
This article proposes that sampling design effects have potentially huge unrecognized impacts on the results reported by large-scale district and state assessments in the United States. When design effects are unrecognized and unaccounted for, they lead to underestimating the sampling error in item and test statistics. Underestimating the sampling…
Descriptors: State Programs, Sampling, Research Design, Error of Measurement
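The design-effect reasoning in the entry above can be made concrete with the standard Kish formula, DEFF = 1 + (m − 1)ρ, where m is the average cluster size and ρ the intraclass correlation. The sketch below is only an illustration under those assumptions; it is not drawn from the Phillips article, and the example numbers (25 students per school, an ICC of .10, a naive standard error of .02) are hypothetical.

```python
# Minimal sketch (not from Phillips, 2015): the Kish design effect commonly used
# to quantify how clustered sampling inflates sampling error.
# m (average cluster size) and rho (intraclass correlation) are assumed inputs.
import math

def design_effect(m: float, rho: float) -> float:
    """Kish design effect for cluster samples: DEFF = 1 + (m - 1) * rho."""
    return 1.0 + (m - 1.0) * rho

def clustered_se(naive_se: float, m: float, rho: float) -> float:
    """Standard error corrected for clustering; ignoring DEFF understates it."""
    return naive_se * math.sqrt(design_effect(m, rho))

# Hypothetical example: 25 students per school with an ICC of 0.10.
print(design_effect(25, 0.10))       # ~3.4
print(clustered_se(0.02, 25, 0.10))  # ~0.0369
```

Under these hypothetical values the corrected standard error is about 1.8 times the naive one, which is the kind of underestimation the abstract warns about.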
Fryer, Marilyn – Creativity Research Journal, 2012
This article explores a number of key issues with regard to the measurement of creativity in the course of conducting psychological research or when applying various evaluation measures. It is argued that, although creativity is a fuzzy concept, it is no more difficult to investigate than other fuzzy concepts people tend to take for granted. At…
Descriptors: Creativity, Educational Research, Psychological Studies, Evaluation Methods
Stufflebeam, Daniel L. – Journal of MultiDisciplinary Evaluation, 2011
Good evaluation requires that evaluation efforts themselves be evaluated. Many things can and often do go wrong in evaluation work. Accordingly, it is necessary to check evaluations for problems such as bias, technical error, administrative difficulties, and misuse. Such checks are needed both to improve ongoing evaluation activities and to assess…
Descriptors: Program Evaluation, Evaluation Criteria, Evaluation Methods, Definitions
Killeen, Peter R. – Psychological Methods, 2010
Lecoutre, Lecoutre, and Poitevineau (2010) have provided sophisticated grounding for "p[subscript rep]." Computing it precisely appears, fortunately, no more difficult than doing so approximately. Their analysis will help move predictive inference into the mainstream. Iverson, Wagenmakers, and Lee (2010) have also validated…
Descriptors: Replication (Evaluation), Measurement Techniques, Research Design, Research Methodology
Lecoutre, Bruno; Lecoutre, Marie-Paule; Poitevineau, Jacques – Psychological Methods, 2010
P. R. Killeen's (2005a) probability of replication ("p[subscript rep]") of an experimental result is the fiducial Bayesian predictive probability of finding a same-sign effect in a replication of an experiment. "p[subscript rep]" is now routinely reported in "Psychological Science" and has also begun to appear in…
Descriptors: Research Methodology, Guidelines, Probability, Computation
Serlin, Ronald C. – Psychological Methods, 2010
The sense that replicability is an important aspect of empirical science led Killeen (2005a) to define "p[subscript rep]," the probability that a replication will result in an outcome in the same direction as that found in a current experiment. Since then, several authors have praised and criticized "p[subscript rep]," culminating…
Descriptors: Epistemology, Effect Size, Replication (Evaluation), Measurement Techniques
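The three entries above concern Killeen's "p[subscript rep]" statistic. For reference, the sketch below computes the commonly cited approximation p_rep = Φ(z/√2), where z is the standard normal quantile of 1 − p for a one-tailed p value; it is an illustration of that approximation, not code from any of the articles.

```python
# Minimal sketch: a commonly cited approximation of Killeen's p_rep, mapping a
# one-tailed p value to the predictive probability that an exact replication
# yields an effect in the same direction. Illustrative only.
import math
from statistics import NormalDist

def p_rep(p_one_tailed: float) -> float:
    """Approximate probability of replication: Phi(z / sqrt(2)), z = Phi^-1(1 - p)."""
    z = NormalDist().inv_cdf(1.0 - p_one_tailed)
    return NormalDist().cdf(z / math.sqrt(2))

# A two-tailed p of .05 (one-tailed .025) gives p_rep of about .917.
print(round(p_rep(0.025), 3))  # 0.917
```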
Sridharan, Sanjeev – American Journal of Evaluation, 2008
This article describes the design and evaluation approaches to address the complexity posed by systems change initiatives. The role of evaluations in addressing the following issues is briefly reviewed: moving from strategic planning to implementation, impacts on system-level coordination, anticipated timeline of impact, and individual level…
Descriptors: Strategic Planning, Case Studies, Reader Response, Evaluation Methods
House, Ernest R. – American Journal of Evaluation, 2008
Drug studies are often cited as the best exemplars of evaluation design. However, many of these studies are seriously biased in favor of positive findings for the drugs evaluated, even to the point where dangerous effects are hidden. In spite of using randomized designs and double blinding, drug companies have found ways of producing the results…
Descriptors: Integrity, Evaluation Methods, Program Evaluation, Experimenter Characteristics
Gugiu, P. Cristian – Journal of MultiDisciplinary Evaluation, 2007
The constraints of conducting evaluations in real-world settings often necessitate the implementation of less than ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., confidence intervals [CI]) cannot be used for evaluative conclusions that are derived from multiple indicators, measures, and data…
Descriptors: Measurement, Evaluation Methods, Evaluation Problems, Error of Measurement
Bamberger, Michael; White, Howard – Journal of MultiDisciplinary Evaluation, 2007
The purpose of this article is to extend the current debate over the need for more rigorous program evaluation in education and other research sectors to the field of international development evaluation, reviewing the different approaches which can be adopted to rigorous evaluation methodology and their…
Descriptors: Program Evaluation, Evaluation Methods, Evaluation Research, Convergent Thinking
Leukefeld, Carl G.; Bukoski, William J. – Journal of Drug Education, 1991
Notes inconsistencies in drug abuse prevention research findings related to such issues as study design and methodology. Presents consensus recommendations made by drug abuse prevention researchers and practitioners who met at the National Institute on Drug Abuse in 1989. Includes specific recommendations directed to modifying prevention approaches;…
Descriptors: Drug Abuse, Evaluation Methods, Evaluation Problems, Prevention
Blinn-Pike, Lynn M.; And Others – Adolescence, 1994
In this pilot study, 14 adolescents kept diaries for 6 consecutive weeks during their pregnancies. Diary entries were analyzed for affective tone, emotional lability, and contextuality. Findings question the psychometric properties of data gathered using one-time, self-report measures with pregnant adolescents because of their fluctuating mood states.…
Descriptors: Adolescents, Evaluation Methods, Evaluation Problems, Females
Mowbray, Carol T.; Bybee, Deborah; Collins, Mary E.; Levine, Phyllis – Evaluation and Program Planning, 1998
Presents a framework to describe typical constraints that program evaluators face. The framework encompasses the usual evaluation costs and other factors that impede data collection or the implementation of strong evaluation designs. These are characterized as internal/substantive or external/political. (SLD)
Descriptors: Costs, Data Collection, Evaluation Methods, Evaluation Problems
Baker, Eva L.; Alkin, Marvin C. – 1984
This chapter discusses current methods of formative evaluation and suggests that these methods themselves need to be subjected to more critical research and evaluation in order to provide them with a more scientific base. Topics addressed include problems and methods associated with data gathering, evaluation designs, number of subjects available,…
Descriptors: Data Collection, Evaluation Methods, Evaluation Problems, Evaluation Utilization

