Peer reviewed
ERIC Number: ED656936
Record Type: Non-Journal
Publication Date: 2021-Sep-28
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Fixed Effect and Random-Effects Meta-Analysis of Randomized Trials When the Outcome Is Continuous and the Number of Studies Is Small
Rrita Zejnullahi
Society for Research on Educational Effectiveness
Background: Meta-analysis is considered the gold standard for evidence synthesis. It combines data from multiple independent sources to produce a summary estimate with improved precision. Traditionally, meta-analytic methods have been applied to large collections of studies, and past research has documented their numerous strengths (Hedges and Olkin, 1985; Hedges and Vevea, 1998). Working groups such as the "What Works Clearinghouse" in education, for instance, use meta-analysis to synthesize evidence from randomized controlled trials and produce the highest level of evidence for policymakers and practitioners. Increasingly, however, the number of studies eligible for synthesis is quite small (i.e., fewer than 10), and it is not uncommon to combine information from only two studies. In situations where the goal is to promote evidence-based decisions, formally addressing the problems encountered in small meta-analyses is critical. In this paper, we first review the conceptual framework of fixed-effect and random-effects procedures and discuss the advantages and disadvantages of each in the context of meta-analysis with few studies. Second, we compare the performance of several random-effects methods used in practice, which have typically been discussed in isolation. We assess performance through the properties of the weighted mean, the properties of several variance estimators, and the coverage probability of confidence intervals for the overall treatment effect. Third, because the performance of random-effects procedures depends largely on the ability to estimate the variance of the summary estimate well, we propose and evaluate alternative variance estimators in this small-sample meta-analytic setting.
Purpose: The purpose of this paper is to advise practitioners on best practices for using fixed-effect and random-effects procedures in small meta-analyses.
Specifically, we seek to answer: (1) What are the trade-offs of applying fixed-effect versus random-effects meta-analysis models to a small collection of studies? (2) To what extent do inferences differ based on the choice of method under a random-effects framework? and (3) What are the implications of unbalancedness among primary studies for random-effects meta-analysis?
Results: To investigate the performance of random-effects procedures in small samples, we conduct simulations that reflect scenarios commonly encountered in practice, i.e., when the number of studies is small (k < 10) and under several configurations of unbalancedness (varying n). The simulations cover random-effects procedures only, because fixed-effect procedures keep coverage rates at the nominal value under the assumption that the within-study variances are known population parameters. Figure 1 reports the coverage probability of 95% confidence intervals for the true mean using three alternative variance estimators (HC1, HC2, and HC3) along with a t-distribution with (k - 1) degrees of freedom. We find that the method using HC3 is superior to all other methods and keeps coverage rates approximately at the nominal value for all k, including the extreme case of k = 2.
Conclusion: Meta-analyses of a small number of studies are quite common in practice, yet little evidence is available about best practices for meta-analytic methods in this context. Our primary goal in this paper was to address this gap in the literature by exploring the feasibility of applying random-effects procedures to a small collection of studies. We demonstrated that the commonly used random-effects methods produce confidence intervals that are too narrow when k is quite small and within-study sample sizes are unequal across primary studies.
Our simulations of the various random-effects procedures suggest that the method using the HC3 variance estimator along with a t-distribution with (k - 1) degrees of freedom for inference outperforms all other methods, providing coverage rates near 95% for small to large heterogeneity and for different scenarios of unbalancedness, including the extreme case of k = 2. Although our simulations cover only a handful of unbalancedness configurations, these cover scenarios that are likely to occur in practice and are thus useful for applied work.
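The recommended procedure (a random-effects weighted mean with an HC-type robust variance and a t(k - 1) reference distribution) can be sketched in a few lines. This is a minimal illustration, not the paper's exact implementation: the DerSimonian-Laird estimator for the between-study variance and the particular HC3 leverage adjustment shown here are common formulations assumed for the sketch, and the effect estimates in the example call are made up.

```python
import numpy as np
from scipy import stats

def re_meta_hc3(y, v, alpha=0.05):
    """Random-effects summary with an HC3-style robust variance and a
    t(k-1) confidence interval.

    y : study effect estimates
    v : within-study variances (treated as known)
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)

    # Between-study variance via DerSimonian-Laird (one common choice;
    # the paper does not commit to a specific tau^2 estimator here).
    w_fe = 1.0 / v
    theta_fe = np.sum(w_fe * y) / np.sum(w_fe)
    Q = np.sum(w_fe * (y - theta_fe) ** 2)          # Cochran's Q
    c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
    tau2 = max(0.0, (Q - (k - 1)) / c)

    # Random-effects weighted mean.
    w = 1.0 / (v + tau2)
    theta = np.sum(w * y) / np.sum(w)

    # HC3-style sandwich variance: each squared residual is inflated by
    # 1 / (1 - h_i)^2, where h_i is study i's leverage (share of weight).
    h = w / np.sum(w)
    var_hc3 = np.sum(w ** 2 * (y - theta) ** 2 / (1.0 - h) ** 2) / np.sum(w) ** 2
    se = np.sqrt(var_hc3)

    # t(k-1) reference distribution instead of the normal.
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=k - 1)
    return theta, (theta - t_crit * se, theta + t_crit * se)

# Illustrative (made-up) effect estimates from k = 3 small trials.
theta, (lo, hi) = re_meta_hc3(y=[0.20, 0.40, 0.10], v=[0.05, 0.02, 0.08])
```

With only k = 3 studies the t critical value (about 4.3 at the 95% level) widens the interval substantially relative to the usual normal-based interval, which is the intended correction for small k. For applied work, an established implementation such as the metafor package in R would normally be preferred over hand-rolled code.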
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Grant or Contract Numbers: N/A
Author Affiliations: N/A