
Identification and Evaluation of Risk of Generalizability Biases in Pilot versus Efficacy/Effectiveness Trials: A Systematic Review and Meta-Analysis

Research output: Contribution to journal › Article

  • Michael W. Beets
  • R. Glenn Weaver
  • John P. A. Ioannidis
  • Marco Geraci
  • Keith Brazendale
  • Lindsay Decker
  • Anthony D. Okely
  • David Lubans
  • Esther van Sluijs
  • Russell Jago (http://orcid.org/0000-0002-3394-0176)
  • Gabrielle Turner-McGrievy
  • James Thrasher
  • Xiaoming Li
  • Andrew J. Milat
Original language: English
Article number: 19
Number of pages: 20
Journal: International Journal of Behavioral Nutrition and Physical Activity
Volume: 17 (2020)
DOI: https://doi.org/10.1186/s12966-020-0918-y
Date accepted/in press: 23 Jan 2020
Date published (current): 11 Feb 2020

Abstract

Background: Preliminary evaluations of behavioral interventions, referred to as pilot studies, predate the conduct of many large-scale efficacy/effectiveness trials. The ability of a pilot study to inform an efficacy/effectiveness trial relies on careful consideration in the design, delivery, and interpretation of the pilot results to avoid exaggerated early discoveries that may lead to subsequent failed efficacy/effectiveness trials. “Risk of generalizability biases (RGBs)” in pilot studies may reduce the probability of replicating results in a larger efficacy/effectiveness trial. We aimed to generate an operational list of potential RGBs and to evaluate their impact in pairs of published pilot studies and larger, better-powered trials on the topic of childhood obesity.

Methods: We conducted a systematic literature review to identify published pilot studies that had a published larger-scale trial of the same or similar intervention. Searches were updated and completed through December 31st, 2018. Eligible studies were behavioral interventions involving youth (≤ 18 years) on a topic related to childhood obesity (e.g., prevention/treatment, weight reduction, physical activity, diet, sleep, screen time/sedentary behavior). Extracted information included study characteristics and all outcomes. A list of nine RGBs was defined and coded: intervention intensity bias, implementation support bias, delivery agent bias, target audience bias, duration bias, setting bias, measurement bias, directional conclusion bias, and outcome bias. Three reviewers independently coded for the presence of RGBs. Multi-level random-effects meta-analyses were performed to investigate the association of the biases with study outcomes.
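
To make the pooling step concrete, the sketch below shows a basic random-effects meta-analysis using the DerSimonian-Laird estimator in Python. It is only an illustrative simplification under assumed inputs: the published analysis used multi-level random-effects models fit to all extracted outcomes, and the function name, effect sizes, and variances here are hypothetical.

    import numpy as np

    def random_effects_pool(effects, variances):
        """Pool effect sizes with a DerSimonian-Laird random-effects model (illustrative only)."""
        effects = np.asarray(effects, dtype=float)
        variances = np.asarray(variances, dtype=float)
        w = 1.0 / variances                              # inverse-variance (fixed-effect) weights
        theta_fe = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
        q = np.sum(w * (effects - theta_fe) ** 2)        # Cochran's Q heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-study variance estimate
        w_re = 1.0 / (variances + tau2)                  # random-effects weights
        theta = np.sum(w_re * effects) / np.sum(w_re)    # pooled random-effects estimate
        se = np.sqrt(1.0 / np.sum(w_re))
        return theta, (theta - 1.96 * se, theta + 1.96 * se)

    # Hypothetical inputs: per-pair differences in standardized effect size
    # (larger trial minus pilot) and their sampling variances.
    diffs = [-0.40, -0.25, -0.35, -0.30]
    variances = [0.020, 0.030, 0.015, 0.025]
    pooled, ci = random_effects_pool(diffs, variances)
    print(f"pooled difference = {pooled:.3f}, 95% CI ({ci[0]:.3f} to {ci[1]:.3f})")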

Results: A total of 39 pilot and larger trial pairs were identified. The frequency of the biases varied: delivery agent bias (19/39 pairs), duration bias (15/39), implementation support bias (13/39), outcome bias (6/39), measurement bias (4/39), directional conclusion bias (3/39), target audience bias (3/39), intervention intensity bias (1/39), and setting bias (0/39). In meta-analyses, delivery agent, implementation support, duration, and measurement bias were associated with attenuations of the effect size of -0.325 (95% CI -0.556 to -0.094), -0.346 (-0.640 to -0.052), -0.342 (-0.498 to -0.187), and -0.360 (-0.631 to -0.089), respectively.

Conclusions: Pre-emptive avoidance of RGBs during the initial testing of an intervention may diminish the voltage drop between pilot and larger efficacy/effectiveness trials and enhance the odds of successful translation.

    Research areas

  • intervention, childhood obesity, youth, physical activity, sleep, diet, screen time, scalability, framework


Documents

  • Full-text PDF (final published version)

    Rights statement: This is the final published version of the article (version of record). It first appeared online via BMC at https://ijbnpa.biomedcentral.com/articles/10.1186/s12966-020-0918-y#Abs1. Please refer to any applicable terms of use of the publisher.

    Final published version, 1.96 MB, PDF document

    Licence: CC BY

