ERIC Number: ED656926
Record Type: Non-Journal
Publication Date: 2021-Sep-27
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Evidence-Based Decisions and Education Policymakers
Nozomi Nakajima
Society for Research on Educational Effectiveness
Background: In today's era of evidence-based policymaking, policymakers face pressure to use research evidence to inform their decisions. This is particularly salient in the United States, where federal law mandates that education leaders implement policies, programs, and strategies that have been demonstrated to improve student outcomes. Despite the strong push to integrate research evidence into policymaking, we know little about the mental models education policymakers use when making evidence-based decisions.

Research Questions: My study addresses two research questions. First, what preferences do policymakers have for research evidence? Researchers studying education policies typically focus on establishing internal validity. However, the same policy can have different impacts for different populations, and policies that are effective in small trials may not be as effective when implemented at scale. These issues raise the question of external validity. Policymakers face the unique task of evaluating whether research findings are relevant to their specific local context. Thus, I seek to understand how policymakers evaluate the internal and external validity of research evidence when making policy decisions. Second, how do policymakers update their beliefs about the effectiveness of education policies? Policymakers are implicitly asked to predict how well policies will work in their local contexts. But how do they form these predictions? Education research on policymakers often highlights the complexity of social and political processes in decision-making, but few studies have empirically examined the cognitive aspects of belief formation.

Setting: I partner with an education organization that offers professional development courses to policymakers. Policymakers are invited to participate in my online study when they enroll in a professional development course between October 2020 and June 2021. All participants complete the survey and experiments on Qualtrics.
Participants: My study includes 2,000 education policymakers working in state and local education agencies across the United States.

Program: Figure 1 below shows an overview of the study. The online survey takes approximately 20 minutes to complete and consists of a background section followed by two experiments, described in the next section.

Research Design: To study policymaker preferences for research evidence, I embed a discrete choice experiment in my survey. Policymakers are presented with hypothetical scenarios in which they evaluate different research evidence about charter schools to help guide policy decisions in their own local setting. Each scenario contains two potential research studies, whose attributes are randomly varied along dimensions of internal and external validity to create realistic variation across studies. These attributes take on several levels, as shown in the table in Figure 1. In each scenario, policymakers are asked to choose the research evidence that is most informative for making their policy decision. To study how policymakers update their beliefs about policy effectiveness, I embed an information experiment in my survey. First, I elicit policymakers' prior beliefs by asking them to predict the effect of charter schools in an urban context. Later in the survey, policymakers are randomly assigned to one of four information groups. The first group sees the prediction made by other education policymakers (the peer treatment). The second group sees the prediction made by researchers (the researcher treatment). The third group also sees the researcher prediction, with additional explanation of how researchers derived it (the researcher-plus treatment). The fourth group is a control, which receives no information.
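The randomization in the discrete choice experiment can be sketched as follows. This is a minimal illustration, not the study's instrument: the attribute names and levels below are hypothetical placeholders standing in for the actual levels listed in the study's Figure 1.

```python
import random

# Hypothetical attribute levels for the choice tasks; the study's actual
# attributes and levels are shown in its Figure 1 (these are placeholders).
ATTRIBUTES = {
    "sample_size": ["small", "large"],
    "num_sites": ["single site", "multiple sites"],
    "method": ["observational", "randomized controlled trial"],
    "urbanicity": ["similar to respondent's context", "different context"],
}

def draw_study(rng):
    """Randomly draw one hypothetical study profile (one level per attribute)."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def draw_scenario(rng):
    """Each choice scenario pairs two independently drawn study profiles."""
    return draw_study(rng), draw_study(rng)

rng = random.Random(42)
study_a, study_b = draw_scenario(rng)
```

Because the two profiles are drawn independently and uniformly across levels, the attribute coefficients in the choice regression are identified without confounding across attributes.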
At the end of the survey, I re-elicit policymakers' predictions for the effect of charter schools in an urban context. In doing so, I am able to examine how policymakers update their beliefs in response to new information from peers and researchers.

Analysis: In the discrete choice experiment, I regress policymaker i's selection of study j on the vector of study attributes shown to respondents in the choice task:

Choice_ij = β′X_ij + ε_ij, (1)

where X_ij denotes the attributes of study j shown to policymaker i. The coefficient of interest is β, and standard errors are clustered at the individual policymaker level. For ease of interpretation, my main specification estimates equation (1) using a linear probability model.

For the information experiment, I use a simple Bayesian learning model. Policymaker i has a prior belief, prior_i, about the policy's effect. In the information experiment, the policymaker receives a signal s related to the policy. The policymaker's posterior belief is then:

posterior_i = λ·s + (1 − λ)·prior_i,

where the weight λ can take a value from 0 (the policymaker ignores the information) to 1 (the policymaker fully adjusts to the information). Thus, the slope between posterior_i and prior_i can be used to estimate the rate at which policymakers update their beliefs: a slope of 1 − λ implies an updating weight of λ. To separate true learning from spurious mean reversion, I leverage the random assignment to different information sources in my experiment and fit the following regression specification:

posterior_i = α + γ1·prior_i + γ2·(T_i × prior_i) + γ3·T_i + ε_i, (2)

where T_i denotes whether the policymaker received the information. The coefficient of interest is γ2, whose magnitude captures the true learning rate λ relative to the control group.

Results: First, I find that policymakers have clear and strong preferences for research evidence. To inform their decisions, policymakers are significantly more likely to use studies with large sample sizes and studies conducted in multiple sites. However, they do not distinguish between evidence from observational data and evidence from randomized controlled trials.
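The logic of the learning-rate regression can be checked on simulated data. The sketch below, with an assumed true updating weight and an illustrative signal value, generates beliefs under the Bayesian model and recovers the learning rate from the interaction coefficient: treated respondents have slope 1 − λ on their prior while controls have slope 1, so the interaction coefficient equals −λ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
true_lambda = 0.6   # assumed updating weight (hypothetical)
signal = 0.15       # illustrative researcher forecast of the policy effect

prior = rng.normal(0.05, 0.10, n)   # elicited prior beliefs
T = rng.integers(0, 2, n)           # 1 = received the forecast, 0 = control
noise = rng.normal(0.0, 0.02, n)    # elicitation noise

# Bayesian update for treated policymakers: posterior = lambda*s + (1-lambda)*prior;
# controls simply restate their prior (plus noise).
posterior = np.where(T == 1,
                     true_lambda * signal + (1 - true_lambda) * prior,
                     prior) + noise

# Regression (2): posterior on prior, treatment, and their interaction.
# The interaction coefficient is (1 - lambda) - 1 = -lambda.
X = np.column_stack([np.ones(n), prior, T, T * prior])
beta, *_ = np.linalg.lstsq(X, posterior, rcond=None)
learning_rate = -beta[3]
print(f"estimated learning rate: {learning_rate:.2f}")
```

Because the control group restates its prior, mean reversion loads onto the main prior slope rather than the interaction, which is the sense in which random assignment separates true learning from spurious reversion.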
In terms of external validity, policymakers are significantly more likely to prefer study contexts similar to their own in terms of urbanicity and student poverty rates, but not in terms of student racial composition. Second, I find that education policymakers respond differently to information from peer policymakers and from researchers. Policymakers are significantly more likely to update their priors when presented with information from researchers than from peers. Using a follow-up survey, I show that this effect is not simply driven by experimenter demand or numerical anchoring, as the learning effects persist for those who received researcher forecasts.

Conclusions: My study highlights the mental models used by education policymakers when making evidence-informed decisions. By using experimental methods to examine 2,000 policymakers tasked with making evidence-based decisions in districts and states across the country, this paper contributes to our broader understanding of evidence-based policymaking in education.
Descriptors: Evidence Based Practice, Decision Making, Educational Policy, Policy Formation, Research Utilization, Preferences, Professional Development, State Departments of Education, School Districts
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Grant or Contract Numbers: N/A
Author Affiliations: N/A