Bell, Stephen H.; Olsen, Robert B.; Orr, Larry L.; Stuart, Elizabeth A. – Educational Evaluation and Policy Analysis, 2016
Evaluations of educational programs or interventions are typically conducted in nonrandomly selected samples of schools or districts. Recent research has shown that nonrandom site selection can yield biased impact estimates. To estimate the external validity bias from nonrandom site selection, we combine lists of school districts that were…
Descriptors: Computation, Validity, Program Evaluation, Sampling
Ronfeldt, Matthew; Campbell, Shanyce L. – Educational Evaluation and Policy Analysis, 2016
Despite growing calls for more accountability of teacher education programs (TEPs), there is little consensus about how to evaluate them. This study investigates the potential for using observational ratings of program completers to evaluate TEPs. Drawing on statewide data on almost 9,500 program completers, representing 44 providers (183…
Descriptors: Teacher Education Programs, Program Effectiveness, Program Evaluation, Observation
Shager, Hilary M.; Schindler, Holly S.; Magnuson, Katherine A.; Duncan, Greg J.; Yoshikawa, Hirokazu; Hart, Cassandra M. D. – Educational Evaluation and Policy Analysis, 2013
This study explores the extent to which differences in research design explain variation in Head Start program impacts. We employ meta-analytic techniques to predict effect sizes for cognitive and achievement outcomes as a function of the type and rigor of research design, quality and type of outcome measure, activity level of control group, and…
Descriptors: Meta Analysis, Preschool Education, Disadvantaged Youth, Outcome Measures
