Peer reviewed
Strasser, Stephen; Deniston, O. Lynn – Evaluation and Program Planning, 1978
Four factors involved in pre-planned and post-planned evaluations of program effectiveness are compared: (1) reliability and cost of data; (2) internal and external validity; (3) obtrusiveness and threat; and (4) goal displacement and program direction. A model to help program administrators decide which approach is more appropriate is also presented. (Author/MH)
Descriptors: Data Collection, Decision Making, Evaluation Criteria, Evaluation Methods
Peer reviewed
Apsler, Robert – Evaluation and Program Planning, 1978
Strasser and Deniston's own analysis (TM 504 254) shows that post-planned evaluations are unsuitable substitutes for pre-planned evaluations. When viewed as post-experimental interviews, however, post-planned evaluations can produce valuable information that complements traditional experimental and quasi-experimental evaluations. (MH)
Descriptors: Data Collection, Evaluation Methods, Objectives, Program Effectiveness