Showing all 3 results
Peer reviewed
Bell, Stephen H.; Olsen, Robert B.; Orr, Larry L.; Stuart, Elizabeth A. – Educational Evaluation and Policy Analysis, 2016
Evaluations of educational programs or interventions are typically conducted in nonrandomly selected samples of schools or districts. Recent research has shown that nonrandom site selection can yield biased impact estimates. To estimate the external validity bias from nonrandom site selection, we combine lists of school districts that were…
Descriptors: Computation, Validity, Program Evaluation, Sampling
Peer reviewed
Ronfeldt, Matthew; Campbell, Shanyce L. – Educational Evaluation and Policy Analysis, 2016
Despite growing calls for more accountability of teacher education programs (TEPs), there is little consensus about how to evaluate them. This study investigates the potential for using observational ratings of program completers to evaluate TEPs. Drawing on statewide data on almost 9,500 program completers, representing 44 providers (183…
Descriptors: Teacher Education Programs, Program Effectiveness, Program Evaluation, Observation
Peer reviewed
Shager, Hilary M.; Schindler, Holly S.; Magnuson, Katherine A.; Duncan, Greg J.; Yoshikawa, Hirokazu; Hart, Cassandra M. D. – Educational Evaluation and Policy Analysis, 2013
This study explores the extent to which differences in research design explain variation in Head Start program impacts. We employ meta-analytic techniques to predict effect sizes for cognitive and achievement outcomes as a function of the type and rigor of research design, quality and type of outcome measure, activity level of control group, and…
Descriptors: Meta Analysis, Preschool Education, Disadvantaged Youth, Outcome Measures