Peer reviewed
ERIC Number: ED656803
Record Type: Non-Journal
Publication Date: 2021-Sep-27
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Communicating Clearinghouse Data: A Statistical Cognition Experiment on Education Practitioners' Understanding of Forest Plots
Katie Fitzgerald; Elizabeth Tipton
Society for Research on Educational Effectiveness
Background: As the body of scientific evidence about what works in education grows, so does the need to communicate that evidence effectively to policy-makers and practitioners. Websites and clearinghouses such as the "What Works Clearinghouse" (WWC), "Evidence for ESSA," "Blueprints for Healthy Youth Development," and the "Education Endowment Foundation" (EEF) have emerged to facilitate evidence-based decision-making for these policy-makers and practitioners. These clearinghouses have taken on the non-trivial task of distilling often-complex research findings for non-researchers. Among other things, this often involves reporting a small number (≤ 5) of effect sizes, their statistical uncertainty, and an overall pooled effect. In this paper, we present a new visualization for clearinghouse data, called a Meta-Analytic Rain Cloud Plot. In developing this plot, we drew on evidence from the data visualization and statistical cognition literatures. Overall, this literature demonstrates that people have poor statistical reasoning skills in general and that statistical misconceptions are widespread and persistent among students, lay people, and researchers (Kühberger et al. 2015; Garfield and Ahlgren 1988; Belia et al. 2005; Correll and Gleicher 2014; Schild and Voracek 2015). Furthermore, many existing plots do not address these misconceptions.
Purpose: We conducted a statistical cognition experiment to evaluate the effectiveness of four versions of forest plots in communicating clearinghouse data to education practitioners: the newly proposed Meta-Analytic Rain Cloud (MARC) plot, a horizontal bar plot (used by the What Works Clearinghouse), a conventional forest plot (used by the Campbell Collaboration), and a Rain Forest Plot proposed by Schild et al. (2015).
We hypothesized that education practitioners would have difficulty interpreting forest plots, particularly conventional forest plots and rain forest plots, but that the new visualization would lead to the most accurate understanding. As a point of comparison, we recruited a sample of education researchers, whom we hypothesized would interpret forest plots better than practitioners overall.
Setting: The study was completed online by participants located in the US.
Participants: We recruited a sample of n = 83 education practitioners and n = 94 education researchers. Practitioners had at least a Bachelor's degree and self-identified as "educators or education decision-makers employed in the US PreK-12 education system." Education researchers were those who indicated that they held a doctoral degree or were currently enrolled in a doctoral program and conducted research relevant to education at a university, think tank, or research organization.
Intervention: The randomized stimuli in this experiment were data visualizations that displayed effect sizes from a set of 5 studies and their meta-analytic summary estimate. The four visualization types can be viewed in the supplementary material. Each participant viewed 4 visualizations and answered the same set of questions about each. There were 7 objective questions requiring them to extract important meta-analytic information from the visualization (e.g., the study that received the most weight and the magnitude of the summary effect) as well as 2 questions eliciting their subjective beliefs about the strength of the evidence. The experiment was conducted via Qualtrics and had a median completion time of 13 minutes.
Research Design: We used a 4 × 2 × 2 factorial design with factors for visualization type (Factor A, 4 levels), statistical significance of the meta-analytic summary effect (Factor B, 2 levels), and magnitude of the meta-analytic summary effect (Factor C, 2 levels).
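The full crossing of the three factors described above can be sketched in a few lines of Python. This is an illustrative sketch only; the level labels below are hypothetical, since the abstract does not name them:

```python
from itertools import product

# Hypothetical labels for the three experimental factors (Factors A, B, C).
visualizations = ["MARC", "bar", "forest", "rain_forest"]  # Factor A, 4 levels
significance = ["significant", "non-significant"]          # Factor B, 2 levels
magnitude = ["small", "large"]                             # Factor C, 2 levels

# The full 4 x 2 x 2 crossing yields the 16 treatment combinations
# that the design then partially confounds into blocks.
treatments = list(product(visualizations, significance, magnitude))
print(len(treatments))  # 16
```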
We partially confounded the 16 treatment combinations in 4 blocks, so that each person was randomly assigned to a block and viewed a total of 4 visualizations. We chose 3 different confounding patterns such that visualization type was fully estimable within people and each factor and interaction was estimable in at least some of the replicates.
Data Collection and Analysis: Data were collected online using Qualtrics and cleaned and analyzed in R (via RStudio). All primary analyses were pre-registered with OSF, and the Type I error rate for each analysis was corrected for multiple comparisons and controlled at the level alpha = 0.05. To assess practitioners' ability to accurately interpret forest plots, we descriptively considered the proportions of respondents able to answer each of the objective questions correctly. To compare the effectiveness of the four visualizations, we defined y_irst to be a sum score (range 0-7) for the number of Questions 1-7 that individual i (i = 1, 2, ..., n) answered correctly when viewing the visualization condition "rst", where the three experimental factors are indexed by r = 1, 2, 3, 4; s = 1, 2; and t = 1, 2, respectively. We modeled the outcome y with an ANOVA model that included an overall mean, main effects for Factors A, B, and C, all two- and three-way interactions between factors, a blocking factor for individuals, and an individual error term. We used Tukey's test for the six pairwise comparisons between the 4 levels of Factor A. We conducted 4 one-sided two-sample t-tests to determine whether education practitioners scored lower on average than researchers for each visualization type.
Results: We found that, compared to the three other visualizations used in practice, the MARC plot is more effective in helping participants correctly interpret evidence (Figure 1).
It offered a 0.76 standard deviation improvement in participant scores compared to the two forest plots and a 0.43 standard deviation improvement over the bar plot (each p < 0.05, adjusted for multiple comparisons). Researchers scored better than practitioners in every case, but the MARC plot brought practitioner scores to a level comparable to researcher scores on traditional displays (Figure 2).
Conclusions: The use of bar plots and confidence interval bars in traditional displays led to poor meta-analytic reasoning, but the design of the MARC plot overcame many of these cognitive difficulties. To our knowledge, this is one of the first studies providing evidence on how best to present the type of information found in clearinghouses to people who have little to no statistical training. Our results serve as a caution about the curse of expertise and a reminder that care should be taken not to assume consumers will interpret information the way experts intend.
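The practitioner-versus-researcher comparison described in the analysis plan (a one-sided two-sample t-test per visualization type) can be sketched as follows. This is a minimal Welch-style sketch with toy data, not the study's actual code or data; the exact variance assumptions of the study's tests are not stated in the abstract:

```python
import math
from statistics import mean, variance

def welch_t(practitioner_scores, researcher_scores):
    """Welch's two-sample t statistic for H1: the first group scores lower.

    A negative t favors the one-sided alternative that the practitioner
    mean is below the researcher mean. (Illustrative sketch only.)
    """
    x, y = practitioner_scores, researcher_scores
    se = math.sqrt(variance(x) / len(x) + variance(y) / len(y))
    return (mean(x) - mean(y)) / se

# Toy sum scores (range 0-7), not from the study: practitioners lower on average.
t = welch_t([3, 4, 2, 5, 3], [5, 6, 5, 7, 6])
print(t < 0)  # True: direction consistent with H1
```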
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Grant or Contract Numbers: N/A
Author Affiliations: N/A