ERIC Number: ED663550
Record Type: Non-Journal
Publication Date: 2024-Sep-21
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Randomized Controlled Trials of Service Interventions: The Impact of Capacity Constraints
Justin Boutilier; Jonas Jonasson; Hannah Li; Erez Yoeli
Society for Research on Educational Effectiveness
Background: Randomized controlled trials (RCTs), or experiments, are the gold standard for intervention evaluation. However, the main appeal of RCTs--the clean identification of causal effects--can be compromised by interference, which arises when one subject's actions influence another subject's behavior or outcomes. In this paper, we formalize and study a type of interference that arises due to "capacity constraints" on the intervention and show how capacity constraints can affect experimental outcomes and the interpretation of RCTs.
Intervention: We focus on a class of interventions that we call "Service Interventions" (SIs): interventions that include an on-demand service component provided by a costly and capacity-constrained resource (e.g., healthcare providers or teachers). As a motivating example, we consider a tuberculosis treatment adherence support program [Yoeli et al. '19], in which patients log their adherence to the treatment plan daily and, upon skipping a day of logging, become eligible for outreach from a human support sponsor who will call and provide a nudge to the patient. Importantly, because of capacity constraints, if the patient load is high, a given patient may not be called immediately and may need to wait for service. The implications of our work extend to any intervention with a capacity-constrained service component. In the education setting, an example of a service intervention is a program for advisor outreach to at-risk students. Here, the advisor is capacity constrained and may not be able to contact all at-risk students immediately.
Methods: This paper develops a mathematical model of a service intervention that incorporates user behavior, wait times, and experimentation strategies using techniques from queueing theory. We provide theoretical analysis as well as simulations.
Results: We first show that in RCTs of service interventions, the capacity constraints induce dependencies across experiment subjects: an individual may need to wait before receiving the intervention. By modeling these dependencies as a queueing system, we show how increasing the number of subjects without increasing the capacity of the system can result in a smaller treatment effect size. Moreover, this decrease in effect size is non-linear. When the number of subjects is below a threshold (which depends on capacity), the effect size is constant in the number of subjects; above the threshold, the effect size decays as 1/N, where N denotes the number of subjects. This relationship between the number of subjects and the effect size has implications for conventional power analysis: increasing the sample size of an RCT without appropriately expanding capacity can paradoxically decrease the study's power to detect a positive effect. Capacity constraints may therefore lead experimenters to mistakenly conclude that the protocol is ineffective if the number of subjects recruited is too high relative to the system capacity. To address this issue and increase the statistical power of the trial, we propose a method to jointly select the system capacity and the number of subjects using the square root staffing rule from queueing theory. We show how incorporating knowledge of the queueing structure can help an experimenter reduce the capacity and number of subjects required while still maintaining high power.
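For illustration only (not taken from the paper): the square root staffing rule referenced above sets the number of servers at roughly c = R + beta * sqrt(R), where R is the offered load (arrival rate divided by per-server service rate) and beta tunes the quality of service. The Python sketch below shows that rule alongside the standard Erlang C waiting probability for an M/M/c queue; the function names and the arrival and service rates in the example are hypothetical.

import math

def square_root_staffing(arrival_rate: float, service_rate: float, beta: float = 1.0) -> int:
    """Servers suggested by the square root staffing rule: ceil(R + beta * sqrt(R))."""
    offered_load = arrival_rate / service_rate          # R = lambda / mu
    return math.ceil(offered_load + beta * math.sqrt(offered_load))

def erlang_c_wait_probability(arrival_rate: float, service_rate: float, servers: int) -> float:
    """P(an arriving subject must wait) in an M/M/c queue (Erlang C formula)."""
    r = arrival_rate / service_rate
    if servers <= r:
        return 1.0                                       # unstable system: essentially everyone waits
    tail = (r ** servers / math.factorial(servers)) * (servers / (servers - r))
    head = sum(r ** k / math.factorial(k) for k in range(servers))
    return tail / (head + tail)

# Hypothetical example: enrolled subjects generate ~20 outreach requests per hour
# and one sponsor completes ~4 calls per hour, so R = 5 and c = ceil(5 + sqrt(5)) = 8.
c = square_root_staffing(arrival_rate=20, service_rate=4, beta=1.0)
print(c, erlang_c_wait_probability(20, 4, c))            # 8 servers, waiting probability ~0.17

The point of the sketch is only to show the mechanism the abstract describes: holding capacity fixed while the arrival rate grows with the number of subjects pushes the waiting probability toward one, which is how congestion can dilute the delivered intervention.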
Conclusion: Descriptively, our paper serves as a reminder that, at a minimum, the outcomes of such service intervention RCTs should be interpreted as conditional on the implemented service level. In other words, a trial that shows a small or no effect can be attributed either to the protocol itself not being effective (the usual interpretation) or to the service level being insufficient. Prescriptively, we provide the experimenter with a framework to jointly optimize the size of the trial (i.e., the number of subjects) and its service level (i.e., the number of servers) in a queueing-informed manner, so as to maximize the probability of detecting a difference between the treatment and control groups. Additionally, our analysis of congestion-driven interference provides one concrete mechanism to explain why similar protocols can result in different RCT outcomes and why interventions that appear promising at the RCT stage may not perform well at scale.
Descriptors: Randomized Controlled Trials, Intervention, Mathematical Models, Interference (Learning), Patients, Compliance (Psychology), Prompting, Effect Size, Statistical Analysis
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Grant or Contract Numbers: N/A
Author Affiliations: N/A