Goldman, Jerry – Evaluation Quarterly, 1977
This note suggests a solution to the problem of achieving randomization in experimental settings where units deemed eligible for treatment "trickle in," that is, appear at any time. The solution permits replication of the experiment in order to test for time-dependent effects. (Author/CTM)
Descriptors: Program Evaluation, Research Design, Research Problems, Sampling
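The "trickle-in" randomization problem is easy to sketch in code. The following is a minimal illustration of one standard device for such settings, permuted-block randomization applied to units in order of arrival; the function names and block size are assumptions for illustration, not necessarily the specific procedure Goldman proposes.

```python
import random

def permuted_block(treatments=("T", "C"), reps=2):
    """Build one randomly permuted block, e.g. ['T', 'C', 'C', 'T']."""
    block = list(treatments) * reps
    random.shuffle(block)
    return block

def trickle_assign(arrivals, reps=2):
    """Randomize units to conditions as they trickle in.

    Each run of 2*reps consecutive arrivals forms an internally
    balanced block, so successive blocks replicate the experiment
    over time and can be compared to test for time-dependent effects.
    """
    assignments, block = [], []
    for unit in arrivals:
        if not block:  # start a fresh permuted block
            block = permuted_block(reps=reps)
        assignments.append((unit, block.pop()))
    return assignments

# Hypothetical units deemed eligible at arbitrary times:
print(trickle_assign([f"unit-{i}" for i in range(8)]))
```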
Cox, Gary B. – Evaluation Quarterly, 1977
Managerial style has implications for program evaluators who wish their work to be utilized in the decision-making process. This article characterizes managerial behavior in general, and draws some inferences as to how utilization would proceed and how it might be increased. (Author/CTM)
Descriptors: Administrator Characteristics, Information Utilization, Organizational Communication, Organizational Theories
Rossi, Peter H. – Evaluation Quarterly, 1978
There is general agreement that human services (i.e., services that depend on direct interpersonal contact between a deliverer and a client) are difficult to evaluate. The author points out some sources of this difficulty and proposes a strategy for the evaluation of human services delivery. (Author/GDC)
Descriptors: Delivery Systems, Evaluation Methods, Human Services, Intervention
Scheirer, Mary Ann – Evaluation Quarterly, 1978
Evaluation data frequently do not confirm program administrators' and recipients' perceptions of benefits; social psychological theory predicts that positive perceptions will occur without regard to actual behavioral and cognitive change. Implications are drawn for program planning and for evaluation methodology. (Author/CTM)
Descriptors: Administrator Attitudes, Attitude Change, Behavior Change, Bias
Gorry, G. Anthony; Goodrich, Thelma Jean – Evaluation Quarterly, 1978
When participants with varied backgrounds and interests join in a collaborative activity, their different viewpoints may make the evaluation of the activity more difficult. Experience evaluating a multidisciplinary biomedical research center illustrates the influence of values on program evaluation. (Author/GDC)
Descriptors: Administrative Problems, Evaluation Criteria, Evaluation Methods, Measurement Objectives
Cochran, Nancy – Evaluation Quarterly, 1978
Distortion and selective disclosure limit data that are available to program evaluators, producing a bias that tends to maintain the status quo. Paradoxically, attempts to objectify the data only increase the potential for distortion. Ironically, one way to encourage innovation may be to not measure it. Additional solutions are suggested.…
Descriptors: Bias, Data Collection, Disclosure, Error Patterns
Cook, Thomas D.; Gruder, Charles L. – Evaluation Quarterly, 1978
Four projects aimed at evaluating the technical quality of recent summative evaluations are discussed, and models of metaevaluation are presented. Common technical problems are identified and practical methods for solving these problems are outlined, but these methods are limited by the current state of the art. (Author/CTM)
Descriptors: Consultants, Data Analysis, Evaluators, Meta Evaluation
Director, Steven M. – Evaluation Quarterly, 1979
A review of the literature suggests that choice of control group may have affected the policy implications of the major evaluations of governmental training programs. It is argued that the usual evaluation designs underadjust for preprogram differences between trainees and the control group and thus yield biased estimates of program impact.…
Descriptors: Analysis of Covariance, Analysis of Variance, Bias, Control Groups
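Director's underadjustment argument can be made concrete with a small simulation. The sketch below is an assumption for illustration, not the article's own analysis: trainees are selected from the lower end of true ability, the observed pretest measures ability with error, and the true program effect is zero. Because the covariance adjustment operates on the fallible pretest, it removes only part of the preprogram gap, and the null program appears to have a negative impact.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True pre-program ability; trainees are drawn from the lower end,
# mimicking nonrandom selection into a training program.
ability = rng.normal(0.0, 1.0, n)
treated = (ability + rng.normal(0.0, 1.0, n)) < 0.0

# Observed pretest = ability + measurement error; posttest depends
# on ability alone, so the true program effect is exactly zero.
pretest = ability + rng.normal(0.0, 1.0, n)
posttest = ability + rng.normal(0.0, 1.0, n)

# Simple covariance adjustment: regress posttest on treatment
# status and the observed pretest.
X = np.column_stack([np.ones(n), treated, pretest])
beta, *_ = np.linalg.lstsq(X, posttest, rcond=None)
print(f"estimated 'program impact': {beta[1]:.3f} (true effect: 0)")
```

With an error-free pretest (replace the pretest line with `pretest = ability`), the same regression recovers an effect near zero, which is the underadjustment point in miniature.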


