Showing all 9 results
Peer reviewed
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
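The tradeoff described in this abstract can be made concrete with a short power calculation. The sketch below uses a generic two-arm formula rather than the authors' method; the per-arm sample sizes and the 80%-power target are illustrative assumptions.

```python
# Minimum detectable effect size (MDES) for a simple two-arm,
# individually randomized trial with a continuous outcome.
# Generic formula; sample sizes and power target are illustrative.
from scipy.stats import norm

def mdes(n_per_arm, alpha=0.05, power=0.80):
    """Smallest standardized effect detectable with the given power."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value, two-sided test
    z_beta = norm.ppf(power)            # value for the desired power
    se = (2.0 / n_per_arm) ** 0.5       # SE of a standardized mean difference
    return (z_alpha + z_beta) * se

# Halving the target impact roughly quadruples the required sample.
print(round(mdes(n_per_arm=400), 2))    # ~0.20 SD
print(round(mdes(n_per_arm=1600), 2))   # ~0.10 SD
```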
Peer reviewed
Louie, Josephine; Rhoads, Christopher; Mark, June – American Journal of Evaluation, 2016
Interest in the regression discontinuity (RD) design as an alternative to randomized controlled trials (RCTs) has grown in recent years. There is little practical guidance, however, on conditions that would lead to a successful RD evaluation or on the utility of studies with underpowered RD designs. This article describes the use of RD design to…
Descriptors: Regression (Statistics), Program Evaluation, Algebra, Supplementary Education
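For readers unfamiliar with the mechanics of a sharp RD analysis, the sketch below fits a local linear regression on either side of a cutoff using simulated data; the cutoff, bandwidth, and outcome model are illustrative assumptions, not details from the study.

```python
# Sharp regression discontinuity sketch: treatment assigned when a running
# variable (e.g., a placement test score) falls below a cutoff, estimated
# by local linear regression within a bandwidth. Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, cutoff = 2000, 50.0
score = rng.uniform(0, 100, n)              # running variable
treated = (score < cutoff).astype(float)    # assigned to supplemental support
outcome = 0.02 * score + 0.3 * treated + rng.normal(0, 1, n)

bandwidth = 10.0
keep = np.abs(score - cutoff) <= bandwidth
centered = score[keep] - cutoff
X = np.column_stack([treated[keep], centered, treated[keep] * centered])
fit = sm.OLS(outcome[keep], sm.add_constant(X)).fit()
print(fit.params[1])   # estimated discontinuity at the cutoff (true jump is 0.3)
```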
Peer reviewed
PDF on ERIC
Stone, Clement A.; Tang, Yun – Practical Assessment, Research & Evaluation, 2013
Propensity score applications are often used to evaluate educational program impact. However, various options are available both to estimate propensity scores and to construct comparison groups. This study used a student achievement dataset with commonly available covariates to compare different propensity score estimation methods (logistic…
Descriptors: Comparative Analysis, Probability, Sample Size, Program Evaluation
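As a concrete illustration of one of the options such studies compare, the sketch below estimates propensity scores with logistic regression and forms a comparison group by greedy 1:1 nearest-neighbor matching. The covariate names and simulated data are hypothetical, not the study's dataset.

```python
# Propensity score sketch: logistic regression for the probability of
# participation, then greedy 1:1 nearest-neighbor matching without
# replacement. Assumes more comparison than treated units.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_on_propensity(df, covariates, treat_col="treated"):
    """Estimate propensity scores and match each treated unit to its nearest comparison."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])
    treated = df[df[treat_col] == 1]
    pool = df[df[treat_col] == 0].copy()
    matched_ids = []
    for _, row in treated.iterrows():
        nearest = (pool["pscore"] - row["pscore"]).abs().idxmin()
        matched_ids.append(nearest)
        pool = pool.drop(nearest)          # match without replacement
    return df.loc[matched_ids]

# Hypothetical data: pretest score and free/reduced-lunch status predict participation.
rng = np.random.default_rng(3)
n = 600
df = pd.DataFrame({"pretest": rng.normal(0, 1, n), "frl": rng.integers(0, 2, n)})
df["treated"] = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * df["pretest"] - 0.3 * df["frl"]))))
comparison_group = match_on_propensity(df, ["pretest", "frl"])
print(len(comparison_group))
```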
von Davier, Matthias – National Education Policy Center, 2011
The primary claim of this Harvard Program on Education Policy and Governance report and the abridged Education Next version is that nations "that pay teachers on their performance score higher on PISA tests." After statistically controlling for several variables, the author concludes that nations with some form of merit pay system have,…
Descriptors: Evidence, Teacher Salaries, Merit Pay, Program Effectiveness
Peer reviewed
Adedokun, Omolola A.; Childress, Amy L.; Burgess, Wilella D. – American Journal of Evaluation, 2011
A theory-driven approach to evaluation (TDE) emphasizes the development and empirical testing of conceptual models to understand the processes and mechanisms through which programs achieve their intended goals. However, most reported applications of TDE are limited to large-scale experimental/quasi-experimental program evaluation designs. Very few…
Descriptors: Feedback (Response), Program Evaluation, Structural Equation Models, Testing
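A minimal flavor of empirically testing a program's conceptual model is a simple path (mediation) analysis, sketched below; a full structural equation model would estimate the paths jointly, and the variables and simulated data here are hypothetical rather than drawn from the article.

```python
# Path-analysis sketch: does a program's effect on an outcome run through
# a hypothesized mediator? Two regressions approximate the a and b paths.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
program = rng.integers(0, 2, n).astype(float)        # participation indicator
engagement = 0.5 * program + rng.normal(0, 1, n)     # hypothesized mediator
outcome = 0.4 * engagement + 0.1 * program + rng.normal(0, 1, n)

# Path a: program -> mediator
a = sm.OLS(engagement, sm.add_constant(program)).fit().params[1]
# Path b: mediator -> outcome, controlling for program
Xb = sm.add_constant(np.column_stack([engagement, program]))
b = sm.OLS(outcome, Xb).fit().params[1]
print("indirect effect (a*b):", round(a * b, 3))     # true value is 0.5 * 0.4 = 0.2
```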
Peer reviewed
Kazi, Mansoor A. F.; Pagkos, Brian; Milch, Heidi A. – Research on Social Work Practice, 2011
Objectives: The purpose of this study was to develop a realist evaluation paradigm in social work evidence-based practice. Method: Wraparound (at Gateway-Longview Inc., New York) used a reliable outcome measure and an electronic database to systematically collect and analyze data on the interventions, the client demographics and circumstances, and…
Descriptors: Intervals, Data Analysis, Social Work, Sample Size
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2008
This article examines theoretical and empirical issues related to the statistical power of impact estimates for experimental evaluations of education programs. The author considers designs where random assignment is conducted at the school, classroom, or student level, and employs a unified analytic framework using statistical methods from the…
Descriptors: Elementary School Students, Research Design, Standardized Tests, Program Evaluation
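The central issue such power analyses turn on is that clustering the random assignment (e.g., at the school level) inflates the minimum detectable effect through the intraclass correlation. The sketch below uses a standard two-level MDES formula; the ICC and school counts are illustrative assumptions, not figures from the article.

```python
# MDES for a school-level (clustered) random assignment design with a
# continuous outcome and no covariate adjustment. Illustrative values only.
from scipy.stats import t

def clustered_mdes(n_schools, students_per_school, icc,
                   prop_treated=0.5, alpha=0.05, power=0.80):
    df = n_schools - 2
    multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    p = prop_treated
    variance = (icc / (p * (1 - p) * n_schools)
                + (1 - icc) / (p * (1 - p) * n_schools * students_per_school))
    return multiplier * variance ** 0.5

# 60 schools x 60 students with ICC = 0.15: MDES is about 0.30 SD,
# versus roughly 0.09 SD if the same 3,600 students were individually randomized.
print(round(clustered_mdes(60, 60, 0.15), 2))
```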
Rosenthal, James A. – Springer, 2011
Written by a social worker for social work students, this is a nuts and bolts guide to statistics that presents complex calculations and concepts in clear, easy-to-understand language. It includes numerous examples, data sets, and issues that students will encounter in social work practice. The first section introduces basic concepts and terms to…
Descriptors: Statistics, Data Interpretation, Social Work, Social Science Research
Peer reviewed
Foster, E. Michael; Bickman, Leonard – Evaluation Review, 1996
Simple methods for detecting attrition (nonresponse) in longitudinal evaluations are reviewed, focusing on regression-based analyses of data from a longitudinal evaluation. The approaches are illustrated with data from the Fort Bragg Evaluation, an evaluation of a major demonstration in children's mental health services. (SLD)
Descriptors: Attrition (Research Studies), Children, Demonstration Programs, Dropouts
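One common regression-based attrition check in this spirit is to model the probability of nonresponse as a function of treatment status and baseline characteristics; the sketch below does so with simulated data and is not the specific analysis from the Fort Bragg Evaluation.

```python
# Attrition check sketch: logistic regression of a dropout indicator on
# treatment status and a baseline covariate. A significant treatment
# coefficient signals differential attrition. Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
treatment = rng.integers(0, 2, n).astype(float)
baseline_severity = rng.normal(0, 1, n)
# Dropout made slightly more likely for severe, untreated cases (by construction).
logit_p = -1.0 + 0.4 * baseline_severity - 0.3 * treatment
dropped_out = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([treatment, baseline_severity]))
fit = sm.Logit(dropped_out, X).fit(disp=0)
print(fit.summary())   # inspect the treatment coefficient and its p-value
```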