Investigating and dealing with publication bias and other reporting biases in meta-analyses: a review

Matthew J Page*, Jonathan A C Sterne, Julian P T Higgins, Matthias Egger

*Corresponding author for this work

Research output: Contribution to journal · Article (Academic Journal) · Peer-reviewed


Abstract

A P value, or the magnitude or direction of results, can influence decisions about whether, when, and how research findings are disseminated. Regardless of whether an entire study or a particular study result is unavailable because investigators considered the results to be unfavorable, bias in a meta‐analysis may occur when available results differ systematically from missing results. In this article, we summarize the empirical evidence for various reporting biases that lead to study results being unavailable for inclusion in systematic reviews, with a focus on health research. These biases include publication bias and selective nonreporting bias. We describe processes that systematic reviewers can use to minimize the risk of bias due to missing results in meta‐analyses of health research, such as comprehensive searches and prospective approaches to meta‐analysis. We also outline methods that have been designed for assessing risk of bias due to missing results in meta‐analyses of health research, including using tools to assess selective nonreporting of results, ascertaining qualitative signals that suggest not all studies were identified, and generating funnel plots to identify small‐study effects, one cause of which is reporting bias.

Highlights
Bias in a meta‐analysis may occur when available results differ systematically from missing results.
Several different tools, plots, and statistical methods have been designed for assessing risk of bias due to missing results in meta‐analyses. These include comparison of prespecified analysis plans with completed reports to detect selective nonreporting of results, consideration of qualitative signals that suggest not all studies were identified, and the use of funnel plots to identify small‐study effects, for which reporting bias is one of several causes.
Information from approaches such as funnel plots and selection models is more difficult to interpret than information from less subjective approaches, such as detection of incompletely reported results in studies for which prespecified analysis plans were available.
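The funnel-plot approach mentioned above is often accompanied by a regression test for asymmetry. As a rough illustration (not the authors' implementation; the function name and the sample data are invented for this sketch), Egger's test regresses each study's standardized effect estimate on its precision; an intercept far from zero is one signal of funnel-plot asymmetry and hence of possible small-study effects:

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry (illustrative sketch).

    Regresses each study's standardized effect (effect / SE) on its
    precision (1 / SE). An intercept far from zero suggests small-study
    effects, of which reporting bias is one possible cause.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    precision = 1.0 / ses
    y = effects / ses                                 # standardized effects
    X = np.column_stack([np.ones_like(precision), precision])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # OLS fit
    resid = y - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / (n - k)                  # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)             # coefficient covariance
    t_int = beta[0] / np.sqrt(cov[0, 0])
    p_int = 2 * stats.t.sf(abs(t_int), df=n - k)      # two-sided p, intercept
    return beta[0], p_int

# Hypothetical effect estimates (e.g., log odds ratios) and standard
# errors from five studies, invented purely for demonstration.
intercept, p = egger_test([0.12, 0.30, 0.21, 0.52, 0.15],
                          [0.05, 0.20, 0.10, 0.30, 0.07])
```

As the highlights note, a statistically significant intercept here is only one of several possible explanations for asymmetry, so such results should be interpreted cautiously alongside the less subjective approaches described above.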
Original language: English
Number of pages: 12
Journal: Research Synthesis Methods
Early online date: 9 Nov 2020
Publication status: E-pub ahead of print, 9 Nov 2020

Keywords

  • Publication bias
  • Systematic review
  • Meta-analysis
  • Reporting
