Source: American Journal of Evaluation (20)
Showing 1 to 15 of 20 results
Peer reviewed
Dahlia K. Remler; Gregg G. Van Ryzin – American Journal of Evaluation, 2025
This article reviews the origins and use of the terms quasi-experiment and natural experiment. It demonstrates how the terms conflate whether variation in the independent variable of interest falls short of random with whether researchers find, rather than intervene to create, that variation. Using the lens of assignment--the process driving…
Descriptors: Quasiexperimental Design, Research Design, Experiments, Predictor Variables
Peer reviewed
Debbie L. Hahs-Vaughn; Christine Depies DeStefano; Christopher D. Charles; Mary Little – American Journal of Evaluation, 2025
Randomized experiments are a strong design for establishing impact evidence because the random assignment mechanism theoretically allows confidence in attributing group differences to the intervention. Growth of randomized experiments within educational studies has been widely documented. However, randomized experiments within education have…
Descriptors: Educational Research, Randomized Controlled Trials, Research Problems, Educational Policy
Peer reviewed
Tipton, Elizabeth – American Journal of Evaluation, 2022
Practitioners and policymakers often want estimates of the effect of an intervention for their local community, e.g., region, state, county. In the ideal, these multiple population average treatment effect (ATE) estimates will be considered in the design of a single randomized trial. Methods for sample selection for generalizing the sample ATE to…
Descriptors: Sampling, Sample Size, Selection, Randomized Controlled Trials
E. C. Hedberg – American Journal of Evaluation, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters, each with constituent individual units of observation (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
Peer reviewed
Andrew P. Jaciw – American Journal of Evaluation, 2025
By design, randomized experiments (XPs) rule out bias from confounded selection of participants into conditions. Quasi-experiments (QEs) are often considered second-best because they do not share this benefit. However, when results from XPs are used to generalize causal impacts, the benefit from unconfounded selection into conditions may be offset…
Descriptors: Elementary School Students, Elementary School Teachers, Generalization, Test Bias
Peer reviewed
Tipton, Elizabeth; Matlen, Bryan J. – American Journal of Evaluation, 2019
Randomized control trials (RCTs) have long been considered the "gold standard" for evaluating the impacts of interventions. However, in most education RCTs, the sample of schools included is recruited based on convenience, potentially compromising a study's ability to generalize to an intended population. An alternative approach is to…
Descriptors: Randomized Controlled Trials, Recruitment, Educational Research, Generalization
Peer reviewed
Demby, Hilary; Jenner, Lynne; Gregory, Alethia; Jenner, Eric – American Journal of Evaluation, 2020
Despite the increase in federal tiered evidence initiatives that require the use of rigorous evaluation designs, such as randomized experiments, there has been limited guidance in the evaluation literature on practical strategies to implement such studies successfully. This paper provides lessons learned in executing experiments in applied…
Descriptors: Randomized Controlled Trials, Evaluation, Experiments, Evaluators
Peer reviewed
Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials, defined as multiple sites or multiple arms in a single evaluation, as well as replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to increase statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
Peer reviewed
Goodman, Lisa A.; Epstein, Deborah; Sullivan, Cris M. – American Journal of Evaluation, 2018
Programs for domestic violence (DV) victims and their families have grown exponentially over the last four decades. The evidence demonstrating the extent of their effectiveness, however, often has been criticized as stemming from studies lacking scientific rigor. A core reason for this critique is the widespread belief that credible evidence can…
Descriptors: Randomized Controlled Trials, Program Evaluation, Program Effectiveness, Family Violence
Peer reviewed
Ledford, Jennifer R. – American Journal of Evaluation, 2018
Randomization of large numbers of participants to different treatment groups is often not a feasible or preferable way to answer questions of immediate interest to professional practice. Single case designs (SCDs) are a class of research designs that are experimental in nature but require only a few participants, all of whom receive the…
Descriptors: Research Design, Randomized Controlled Trials, Experimental Groups, Control Groups
Peer reviewed
Finucane, Mariel McKenzie; Martinez, Ignacio; Cody, Scott – American Journal of Evaluation, 2018
In the coming years, public programs will capture even more and richer data than they do now, including data from web-based tools used by participants in employment services, from tablet-based educational curricula, and from electronic health records for Medicaid beneficiaries. Program evaluators seeking to take full advantage of these data…
Descriptors: Bayesian Statistics, Data Analysis, Program Evaluation, Randomized Controlled Trials
Peer reviewed
Karras-Jean Gilles, Juliana; Astuto, Jennifer; Gjicali, Kalina; Allen, LaRue – American Journal of Evaluation, 2019
Secondary data analysis was employed to scrutinize factors affecting sample retention in a randomized evaluation of an early childhood intervention. Retention was measured by whether data were collected at 3 points over 2 years. The participants were diverse, immigrant, and U.S.-born families of color from urban, low-income communities. We…
Descriptors: Early Childhood Education, Intervention, Persistence, Recruitment
Peer reviewed
Raudenbush, Stephen W.; Bloom, Howard S. – American Journal of Evaluation, 2015
The present article provides a synthesis of the conceptual and statistical issues involved in using multisite randomized trials to learn about and from a distribution of heterogeneous program impacts across individuals and/or program sites. Learning "about" such a distribution involves estimating its mean value, detecting and quantifying…
Descriptors: Program Effectiveness, Randomized Controlled Trials, Statistical Distributions, Computation
Peer reviewed
Chaney, Bradford – American Journal of Evaluation, 2016
The primary technique that many researchers use to analyze data from randomized control trials (RCTs)--detecting the average treatment effect (ATE)--imposes assumptions upon the data that often are not correct. Both theory and past research suggest that treatments may have significant impacts on subgroups even when showing no overall effect.…
Descriptors: Randomized Controlled Trials, Data Analysis, Outcomes of Treatment, Simulation
Peer reviewed
Page, Lindsay C.; Feller, Avi; Grindal, Todd; Miratrix, Luke; Somers, Marie-Andree – American Journal of Evaluation, 2015
Increasingly, researchers are interested in questions regarding treatment-effect variation across partially or fully latent subgroups defined not by pretreatment characteristics but by postrandomization actions. One promising approach to address such questions is principal stratification. Under this framework, a researcher defines endogenous…
Descriptors: Statistical Analysis, Program Effectiveness, Randomized Controlled Trials, Social Science Research