Publication Date
| In 2026 | 0 |
| Since 2025 | 2 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 15 |
| Since 2007 (last 20 years) | 70 |
Descriptor
| Program Evaluation | 583 |
| Research Problems | 583 |
| Evaluation Methods | 218 |
| Research Methodology | 198 |
| Program Effectiveness | 169 |
| Research Design | 134 |
| Educational Research | 107 |
| Elementary Secondary Education | 91 |
| Evaluation Criteria | 64 |
| Research Needs | 64 |
| Models | 56 |
Location
| United States | 7 |
| Australia | 5 |
| Canada | 5 |
| Florida | 5 |
| California | 4 |
| District of Columbia | 4 |
| New Zealand | 4 |
| North Carolina | 4 |
| Connecticut | 3 |
| Kentucky | 3 |
| Minnesota | 3 |
Assessments and Surveys
| Advanced Placement… | 1 |
| Expressive One Word Picture… | 1 |
| National Longitudinal Study… | 1 |
| Peabody Picture Vocabulary… | 1 |
| Wechsler Intelligence Scale… | 1 |
| Woodcock Reading Mastery Test | 1 |
What Works Clearinghouse Rating
| Does not meet standards | 3 |
Wandersman, Lois Pall – 1981
Experiences in developing and evaluating a parent group program for new parents are recounted, and problems and strategies for future programs supporting new parents are discussed. The Family Development Parenting Groups (FDPGs) developed in Nashville are also described. Meeting in the infant's second or third month for six weekly and then four…
Descriptors: Adjustment (to Environment), Discussion Groups, Ethics, Objectives
Brickell, Henry M.; And Others – Evaluation Comment, 1976
External political pressures that influence the role and methodology of evaluation are described in the lead article by Henry M. Brickell. These influences, according to the author, are constantly present and are often more powerful in their effect than the actual evaluation findings. Based upon his experiences as an evaluator of educational…
Descriptors: Contracts, Evaluation Methods, Evaluators, Guidelines
Rossi, Peter H. – Evaluation Quarterly, 1978
There is general agreement that human services (i.e., services that depend on direct interpersonal contact between a deliverer and a client) are difficult to evaluate. The author points out some sources of this difficulty and proposes a strategy for the evaluation of human services delivery. (Author/GDC)
Descriptors: Delivery Systems, Evaluation Methods, Human Services, Intervention
Peer reviewed: Alemi, Farrokh – Evaluation Review, 1987
Trade-offs are implicit in choosing a subjective or objective method for evaluating social programs. The differences between Bayesian and traditional statistics, decision and cost-benefit analysis, and anthropological and traditional case systems illustrate trade-offs in choosing methods because of limited resources. (SLD)
Descriptors: Bayesian Statistics, Case Studies, Evaluation Methods, Program Evaluation
Peer reviewed: Kytle, Jackson; Millman, Ernest Joel – Evaluation and Program Planning, 1986
This paper focuses on the discrepancy the authors personally experienced between the stated principles of social research and experience with several applied social research projects over the last 10 years. Three cases of applied social research are presented and critiqued, and two types of structural problems were found. (Author/LMO)
Descriptors: Educational Principles, Evaluation Criteria, Program Evaluation, Research Methodology
Peer reviewed: Borus, Michael E.; Buntz, Charles G. – Industrial and Labor Relations Review, 1972
Descriptors: Cost Effectiveness, Evaluation Methods, Federal Programs, Labor Force Development
Peer reviewed: Frazier, Charles E. – Youth and Society, 1983
Analyzes data from a juvenile diversion program which seem to indicate that, as program services increased, the likelihood of participant recidivism increased. Maintains that the unexpected findings resulted from flaws in data recording within the program and suggests that program designs should incorporate evaluation strategies in order to ensure…
Descriptors: Data Collection, Delinquency, Delinquency Prevention, Program Effectiveness
Peer reviewed: Magidson, Jay; Sorbom, Dag – Educational Evaluation and Policy Analysis, 1982
The LISREL V computer program is applied to a weak quasi-experimental design involving the Head Start program, as a multiple-analysis attempt to ensure that differences between nonequivalent control groups do not confound interpretation of a posteriori differences. (PN)
Descriptors: Achievement Gains, Early Childhood Education, Mathematical Models, Program Evaluation
Allington, Richard L. – School Administrator, 1997
Although converging evidence favors fostering phonemic segmentation and phonic decoding knowledge in the primary grades, there is little agreement on the best ways to accomplish these goals. The well-documented importance of teacher expertise is often ignored. Administrators evaluating reading programs should exercise considerable skepticism and…
Descriptors: Evaluation Criteria, Phonics, Primary Education, Program Evaluation
Peer reviewed: Aleamoni, Lawrence M. – Journal of Personnel Evaluation in Education, 1990
Characteristics and concerns of faculty development programs are briefly outlined to suggest reasons for the dearth of research in this area of program evaluation. The lack of representative and accurate outcome measures is seen as the central reason behind the lack of research. (TJH)
Descriptors: Colleges, Educational Research, Faculty Development, Outcomes of Education
Boraks, Nancy – Adult Literacy and Basic Education, 1988
Discusses the need for balance in the research and evaluation of adult beginning readers in seven areas: program success and failure, problem definition versus adult competence, reminiscence versus current requirements, call to action versus knowledge, sociological and instructional diagnosis, assumption versus honest appraisal, and reporting on…
Descriptors: Adult Education, Adult Literacy, Adult Reading Programs, Beginning Reading
Walberg, Herbert J.; Greenberg, Rebecca C. – Phi Delta Kappan, 1999
Highlighting Success for All, this article argues that federal funds are being used to support the promulgation and biased evaluation of failed programs. Educators need to beware of conflicts of interest and developers' misleading claims about publicly and privately developed programs. Independent evaluators are needed. (MLH)
Descriptors: Bias, Conflict of Interest, Elementary Education, Federal Aid
Garan, Elaine M. – Phi Delta Kappan, 2001
The National Reading Panel admits its evaluation report on phonics is seriously flawed as to organization, methodology, appropriateness of research base, generalizability of results, reliability, validity, and accuracy of data reported. However, an influential public-relations machine is promoting the study's favorable results as unvarnished…
Descriptors: Elementary Education, Meta Analysis, Phonics, Program Evaluation
Peer reviewed: Lundgren, Lena; Amodeo, Maryann; Thompson, David C.; Collins, Charles; Ellis, Michael – Evaluation and Program Planning, 1999
This special issue presents results of a multisite, multiyear national demonstration project designed to test the effectiveness of integrating street outreach programs aimed at preventing HIV infection with referral to substance abuse treatment for populations that historically have been difficult to reach. (Author/SLD)
Descriptors: Acquired Immune Deficiency Syndrome, Demonstration Programs, Outreach Programs, Prevention
Schochet, Peter; Burghardt, John – Evaluation Review, 2007
This article discusses the use of propensity scoring in experimental program evaluations to estimate impacts for subgroups defined by program features and participants' program experiences. The authors discuss estimation issues and provide specification tests. They also discuss the use of an overlooked data collection design--obtaining predictions…
Descriptors: Program Effectiveness, Scoring, Experimental Programs, Control Groups