Background
Meta-analysts and users of meta-analysis software often seek to explain between-study heterogeneity by splitting studies into subgroups. The usual initial question is whether there is statistical evidence that the effect size differs between subgroups, which calls for a statistical test for subgroup effects.

Objectives
(1) To outline the different methods available for testing for subgroup effects; (2) to illustrate three methods using an example meta-analysis from the literature; (3) to review theoretical and empirical evidence on the validity of these methods; (4) to compare the availability of methods in some common software packages, and the recommendations of some standard textbooks.

Results
(1) There are three main methods: (a) fixed-effect methods based on an ANOVA-like partitioning of the heterogeneity statistic Q; (b) random-effects (mixed-effects) meta-regression; (c) Bucher's method for adjusted indirect comparisons, usually applied to two groups but extendable to more. (2) In the example, the fixed-effect method suggested evidence for a subgroup effect (p=0.014), while the other two methods did not (p>0.2). (3) Both theoretical and empirical evidence show that the fixed-effect method gives inflated Type I error rates whenever there remains unexplained (residual) between-study heterogeneity, even when the residual heterogeneity is not statistically significant. (4) Software packages vary in the availability of the first two methods and in the default method when both are offered; none offered Bucher's method. Recommendations in textbooks vary widely.

Conclusions
Tests for subgroup effects based on fixed-effect methods are often misleading and should not be used. Software for meta-analysis should reflect this.
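To make the contrast between methods (a) and (c) concrete, here is a minimal sketch in Python of two of the tests named above: the fixed-effect subgroup test that partitions Cochran's Q into between- and within-subgroup components, and a Bucher-style indirect comparison of two subgroups using random-effects (DerSimonian-Laird) pooled estimates. All function names and the toy data are illustrative assumptions, not taken from the abstract; the formulas are the standard inverse-variance ones.

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance fixed-effect pooled estimate and its variance."""
    w = [1.0 / v for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, 1.0 / sum(w)

def q_statistic(effects, variances):
    """Cochran's Q: weighted squared deviations about the fixed-effect mean."""
    est, _ = fixed_effect_pool(effects, variances)
    return sum((e - est) ** 2 / v for e, v in zip(effects, variances))

def fixed_effect_subgroup_test(groups):
    """ANOVA-like partition: Q_between = Q_total - sum of within-subgroup Q.

    Under the null (and with no residual heterogeneity), Q_between is
    approximately chi-squared with (number of subgroups - 1) df.
    Each group is a list of (effect, variance) pairs.
    """
    all_e = [e for g in groups for e, _ in g]
    all_v = [v for g in groups for _, v in g]
    q_total = q_statistic(all_e, all_v)
    q_within = sum(
        q_statistic([e for e, _ in g], [v for _, v in g]) for g in groups
    )
    return q_total - q_within, len(groups) - 1

def dersimonian_laird_tau2(effects, variances):
    """Method-of-moments estimate of between-study variance tau^2."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    q = q_statistic(effects, variances)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (k - 1)) / c)

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate and variance."""
    tau2 = dersimonian_laird_tau2(effects, variances)
    w = [1.0 / (v + tau2) for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, 1.0 / sum(w)

def bucher_z(group_a, group_b):
    """Bucher-style comparison: z-score for the difference of the two
    subgroups' random-effects pooled estimates."""
    ea, va = random_effects_pool([e for e, _ in group_a], [v for _, v in group_a])
    eb, vb = random_effects_pool([e for e, _ in group_b], [v for _, v in group_b])
    return (ea - eb) / math.sqrt(va + vb)
```

With homogeneous subgroups the two approaches agree; the abstract's point is that when residual heterogeneity is present, the chi-squared reference distribution for Q_between is too narrow, whereas Bucher's comparison built on random-effects estimates absorbs that heterogeneity into the standard errors.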
Translated title of the contribution: Should we stop using fixed-effect methods to test for subgroup effects?
Title of host publication: Third Annual Meeting of The Society for Research Synthesis Methodology, Corfu, Greece
Publication status: Published - 2008