ERIC Number: EJ800124
Record Type: Journal
Publication Date: 2007-Jul
Pages: 12
Abstractor: ERIC
ISBN: N/A
ISSN: ISSN-1556-8180
EISSN: N/A
Available Date: N/A
Why an Active Comparison Group Makes a Difference and What to Do about It
Datta, Lois-ellin
Journal of MultiDisciplinary Evaluation, v4 n7 p1-12 Jul 2007
The Randomized Controlled Trial (RCT) design and its quasi-experimental kissing cousin, the Comparison Group Trial (CGT), are golden to some and not even silver to others. At the center of the affection, and at the vortex of the discomfort, are beliefs about what it takes to establish causality. These designs are considered primarily when the purpose of the evaluation is to establish whether there are outcomes associated with a program and, if so, how confidently the results can be attributed to the program. This article focuses on one after-assignment condition that may notably affect the logic of the RCT and CGT designs, particularly the central assumption that, all other things being equal, any observed differences between the experimental (E, treatment) and nonexperimental (C, control, comparison) groups are attributable to the treatment. The concern might be characterized as augmentation of the control and experimental groups with relevant non-program services in non-random, potentially biasing, ways. Somewhat more attention is given to the experiences of the C group because of this group's particular significance for the logic of the RCT. Three key points are made: (1) in human service programs, the C groups are likely to be active rather than passive, and ditto the E groups; (2) it matters if the groups are active, because this can lead to non-random augmentation of services, particularly for the Cs but also for the Es; and (3) since the best assumption for human service programs may be an active C group, the evaluator, like Hamlet, should take arms against this sea of troubles both prospectively and retrospectively. In examining the national Head Start evaluation, whose next report, presenting the children's prowess in first grade, is expected in 2007, the article states that the evaluation may represent the high-water mark for an RCT in the context of a mature, widely available national program. It also states that the policy space surrounding preschool programs for low-income children made the RCT design an inappropriate choice to begin with. (Contains 2 tables.)
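The abstract's second point, that an active C group can bias the E-C contrast, can be illustrated with a minimal simulation sketch. This is not from the article; the sample size, effect size, and crossover rate below are hypothetical. It shows how, when a share of the comparison group obtains similar services from other providers, the observed E-C difference understates the true program effect.

```python
import random
from statistics import mean

random.seed(0)

N = 10_000            # hypothetical sample size per arm
TRUE_EFFECT = 5.0     # hypothetical program effect on an outcome score
CROSSOVER_RATE = 0.4  # hypothetical share of C-group members receiving similar services elsewhere

def outcome(gets_services: bool) -> float:
    """Outcome score: baseline noise plus the service effect if services are received."""
    base = random.gauss(50.0, 10.0)
    return base + (TRUE_EFFECT if gets_services else 0.0)

# E group: everyone receives program services.
e_scores = [outcome(True) for _ in range(N)]

# Passive C group: no one receives comparable services.
c_passive = [outcome(False) for _ in range(N)]

# Active C group: a non-trivial share obtains comparable services outside the program.
c_active = [outcome(random.random() < CROSSOVER_RATE) for _ in range(N)]

print(f"E - C (passive C): {mean(e_scores) - mean(c_passive):.2f}")  # close to TRUE_EFFECT
print(f"E - C (active C):  {mean(e_scores) - mean(c_active):.2f}")   # attenuated toward TRUE_EFFECT * (1 - CROSSOVER_RATE)
```

Under these assumed numbers, the E-C contrast with an active comparison group shrinks from roughly 5 points to roughly 3, even though the program's true effect is unchanged, which is the attenuation the article warns about.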
Descriptors: Experimental Groups, Preschool Education, National Programs, Disadvantaged Youth, Program Effectiveness, Comparative Analysis, Outcomes of Treatment, Control Groups, Human Services, Early Intervention, Program Evaluation, Evaluation Research, Evaluation Methods, Science Instruction, Television, Programming (Broadcast)
Evaluation Center, Western Michigan University. 1903 West Michigan Avenue, Kalamazoo, MI 49008-5237. Tel: 269-387-5895; Fax: 269-387-5923; e-mail: eval-center@wmich.edu; Web site: http://jmde.com
Publication Type: Journal Articles; Reports - Evaluative
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: N/A