ERIC Number: ED659344
Record Type: Non-Journal
Publication Date: 2023-Sep-29
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
A Knowledge Mobilization Framework: Towards Evidence-Based Statistical Communication Practices in Education Research
Elizabeth Tipton; Katie Fitzgerald
Society for Research on Educational Effectiveness
Background: Since the founding of the Institute of Education Sciences (IES) in 2002, the field of education research has seen impressive progress in its efforts to understand which interventions can effectively improve student outcomes. This success can be seen in the rise of high-quality causal studies -- including both randomized trials and strong quasi-experiments -- which have become the "normal science" in education (Singer, 2019). Over the past twenty years, for example, IES alone has funded over 400 efficacy and effectiveness studies testing interventions in schools in the United States. Over time, researchers focused on Research-Practice Partnerships (RPPs) have questioned whether this evidence is useful and used in actual education decision-making. For example, Penuel et al. (2017) showed that over 50% of school district leaders never or only rarely consulted either the What Works Clearinghouse or Regional Education Laboratories when making curricular decisions in schools. In response to these same concerns about the "use and usefulness of education research," the 2022 National Academies of Sciences, Engineering, and Medicine (NASEM) report on the 'Future of IES' proposed knowledge mobilization as one of five types of research necessary for the field to become more equitable and responsive to the needs of US educational organizations and decision-makers (National Academies of Sciences, Engineering, and Medicine, 2022). Importantly, the report makes clear that its call is not just for a renewed focus on knowledge mobilization efforts, but for an investment in knowledge mobilization as a program of research in and of itself, proposing that "strategies to mobilize knowledge be studied directly" and asserting a need for "developing and testing robust strategies to foster the use of research in varied contexts." Purpose/Research Question: But how might this field of knowledge mobilization studies be structured?
This paper seeks to answer this question by arguing that the field might benefit from structuring itself around a decision-making taxonomy that divides knowledge mobilization research into three types -- normative, descriptive, and prescriptive. We delineate this three-faceted framework and demonstrate how it considers the perspectives and priorities of both researchers and education decision-makers and facilitates healthy feedback loops that examine the norms and practices of both. To situate and illustrate the usefulness of the taxonomy, we focus on one narrow aspect of knowledge mobilization in education: How should statistical evidence be reported and conveyed to facilitate evidence-based decision-making by education practitioners and policy makers? Findings/Results: Taxonomy: We utilize a decision-making taxonomy proposed by Bell et al. (1988) and adapted to statistical cognition by Beyth-Marom et al. (2008). We adapt it further for decision-making from statistical evidence in education research more specifically. The taxonomy can be used to divide research on knowledge mobilization in education into three types: (1) Normative: How should education decision-makers evaluate statistical evidence?; (2) Descriptive: How do education decision-makers evaluate statistical evidence?; and (3) Prescriptive: How can we help education decision-makers make better evaluations of statistical evidence? Often, there is a gap between the descriptive and normative use of statistical evidence -- between how people actually reason about the evidence and how experts believe they should reason about it. We present several examples of this type of reasoning gap. Prescriptive research, then, is concerned with developing and testing strategies and media of statistical communication that would close that gap and help people reason well about statistical evidence.
Case study: To further illustrate this framework, we consider a case study of evidence from the What Works Clearinghouse (WWC) on a particular intervention, Cognitive Tutor Algebra I, which has been evaluated by five studies that meet WWC standards (Figure 1). Normative questions in this context ask: How should people reason about a collection of studies? What is the appropriate way to make sense of the six lines of research presented in Figure 1? What norms are embedded in the way this evidence is presented? Descriptive research is concerned with understanding how education decision-makers reason about and interpret Figure 1. Do they primarily consider the summary of the evidence (Index of 4), or do they make ad hoc judgments and syntheses of the five individual studies, perhaps showing preference for studies in contexts "like theirs"? How do they consider studies not included here? How do they interpret the improvement index? Prescriptive research asks: What are effective strategies and means of communication to bridge the gap between the normative and the descriptive? What dashboards and visualizations might facilitate better decision-making by practitioners? What information should be included in those dashboards and visualizations, and how should it be presented? Importantly, bridging this gap is not simply about moving decision-makers in line with the priorities of researchers. For example, if the judgments and priorities of researchers (e.g., internal validity) are not in line with those of decision-makers (e.g., external validity), it may be that an effective intervention changes practices for both groups. Relevant literatures: We point to relevant literature -- in cognitive psychology, statistical cognition, and data visualization -- regarding the curse of expertise (Birch & Bloom, 2007; Camerer et al., 1989; Xiong et al., 2019), people's statistical misconceptions (e.g.,
of confidence intervals and error bars), and cognitive pitfalls in reasoning visually about data, all of which should inform our communication practices. We also highlight examples of existing normative, descriptive, and prescriptive research that can serve as models for the types of studies needed to establish a robust knowledge mobilization evidence base in education. Conclusions: Underlying the call to knowledge mobilization research is a desire -- as an evidence-based field -- to be more evidence-based in our own practices as education researchers. We argue that any effective knowledge mobilization enterprise must (1) examine the norms embedded in the evidence we communicate, (2) descriptively understand how a broad range of decision-makers reason about this evidence, as well as their varied decision-making needs, and (3) prescriptively develop and evaluate communication strategies that facilitate better use of evidence by decision-makers. The normative, descriptive, prescriptive framework offers a way forward for establishing an integrated science that subjects our own practices to the same scrutiny we apply to educational interventions themselves.
Descriptors: Evidence Based Practice, Educational Research, Statistics, Decision Making, Curriculum Development, Taxonomy, Communication Strategies
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Related Records: ED661724
Publication Type: Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Grant or Contract Numbers: N/A
Author Affiliations: N/A