Statistical power for cluster analysis

Edwin S Dalmaijer*, Camilla L Nord, Duncan E Astle

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

15 Citations (Scopus)
76 Downloads (Pure)

Abstract

BACKGROUND: Cluster algorithms are gaining in popularity in biomedical research due to their compelling ability to identify discrete subgroups in data, and their increasing accessibility in mainstream software. While guidelines exist for algorithm selection and outcome evaluation, there are no firmly established ways of computing a priori statistical power for cluster analysis. Here, we estimated power and classification accuracy for common analysis pipelines through simulation. We systematically varied subgroup size, number, separation (effect size), and covariance structure. We then subjected generated datasets to dimensionality reduction approaches (none, multi-dimensional scaling, or uniform manifold approximation and projection) and cluster algorithms (k-means, agglomerative hierarchical clustering with Ward or average linkage and Euclidean or cosine distance, HDBSCAN). Finally, we directly compared the statistical power of discrete (k-means), "fuzzy" (c-means), and finite mixture modelling approaches (which include latent class analysis and latent profile analysis).
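The simulation setup described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' actual code: it generates two multivariate normal subgroups whose centroids are separated by a chosen effect size Δ (in units of the within-group standard deviation), clusters them with k-means, and scores subgroup recovery with the adjusted Rand index. The sample size, feature count, and seed are arbitrary choices for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(42)

# Two multivariate normal subgroups of N = 20 each in 4 features.
# Offsetting each feature by delta / sqrt(n_features) makes the
# Euclidean distance between the centroids equal to delta.
n_per_group, n_features, delta = 20, 4, 4.0
group_a = rng.normal(loc=0.0, scale=1.0, size=(n_per_group, n_features))
group_b = rng.normal(loc=delta / np.sqrt(n_features), scale=1.0,
                     size=(n_per_group, n_features))
X = np.vstack([group_a, group_b])
truth = np.repeat([0, 1], n_per_group)

# Cluster, then score recovery of the ground-truth subgroup labels;
# an adjusted Rand index of 1.0 means perfect recovery.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(adjusted_rand_score(truth, labels))
```

The same generated data could equally be passed through a dimensionality reduction step (e.g. `sklearn.manifold.MDS`) before clustering, mirroring the pipelines compared in the paper.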

RESULTS: We found that clustering outcomes were driven by large effect sizes or the accumulation of many smaller effects across features, and were mostly unaffected by differences in covariance structure. Sufficient statistical power was achieved with relatively small samples (N = 20 per subgroup), provided cluster separation was large (Δ = 4). Finally, we demonstrated that fuzzy clustering can provide a more parsimonious and powerful alternative for identifying separable multivariate normal distributions, particularly those with slightly lower centroid separation (Δ = 3).
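"Power" here can be made concrete by simulation: it is the proportion of simulated datasets in which the clustering recovers the ground-truth subgroups. The sketch below is a minimal illustration of that idea, assuming a k-means pipeline, an adjusted Rand index success threshold of 0.8, and the sample sizes named in the abstract; the threshold and simulation counts are illustrative choices, not the paper's exact criteria.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def empirical_power(n_per_group=20, n_features=4, delta=4.0,
                    n_sims=200, ari_threshold=0.8, seed=1):
    """Proportion of simulated two-subgroup datasets in which k-means
    recovers the ground-truth labels (ARI at or above the threshold)."""
    rng = np.random.default_rng(seed)
    truth = np.repeat([0, 1], n_per_group)
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
        b = rng.normal(delta / np.sqrt(n_features), 1.0,
                       size=(n_per_group, n_features))
        X = np.vstack([a, b])
        labels = KMeans(n_clusters=2, n_init=10,
                        random_state=0).fit_predict(X)
        hits += adjusted_rand_score(truth, labels) >= ari_threshold
    return hits / n_sims

print(empirical_power(delta=4.0))  # large separation: power should be high
print(empirical_power(delta=1.0))  # small separation: power should be low
```

Increasing `n_per_group` in this toy setup does little once the subgroups are well separated, which matches the abstract's observation that effect size, not sample size, is the dominant factor.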

CONCLUSIONS: Traditional intuitions about statistical power only partially apply to cluster analysis: increasing the number of participants above a sufficient sample size did not improve power, but effect size was crucial. Notably, for the popular dimensionality reduction and clustering algorithms tested here, power was only satisfactory for relatively large effect sizes (clear separation between subgroups). Fuzzy clustering provided higher power in multivariate normal distributions. Overall, we recommend that researchers (1) only apply cluster analysis when large subgroup separation is expected, (2) aim for sample sizes of N = 20 to N = 30 per expected subgroup, (3) use multi-dimensional scaling to improve cluster separation, and (4) use fuzzy clustering or mixture modelling approaches that are more powerful and more parsimonious with partially overlapping multivariate normal distributions.
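Recommendation (4) favours soft-assignment methods for partially overlapping distributions. As one concrete instance of finite mixture modelling (again an illustrative sketch, not the authors' pipeline), a Gaussian mixture model returns a membership probability per subgroup for every observation, rather than the hard labels of k-means; the Δ = 3 separation and sample size below are example values taken from the abstract's scenario.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Two partially overlapping multivariate normal subgroups (delta = 3).
n, p, delta = 30, 4, 3.0
X = np.vstack([
    rng.normal(0.0, 1.0, size=(n, p)),
    rng.normal(delta / np.sqrt(p), 1.0, size=(n, p)),
])

# Finite mixture model: each row of `posterior` holds the probability
# that the observation belongs to each component (soft assignment).
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)
posterior = gmm.predict_proba(X)   # shape (60, 2); rows sum to 1
hard = posterior.argmax(axis=1)    # collapse to discrete labels if needed
print(posterior.round(2)[:3])
```

The posterior probabilities make borderline cases visible (rows near 0.5/0.5), which is exactly where hard clustering loses information on overlapping distributions.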

Original language: English
Article number: 205
Number of pages: 28
Journal: BMC Bioinformatics
Volume: 23
Issue number: 1
DOIs
Publication status: Published - 31 May 2022

Bibliographical note

Funding Information:
ESD and DEA are supported by grant TWCF0159 from the Templeton World Charity Foundation to DEA. CLN is supported by an AXA Fellowship. All authors are supported by the UK Medical Research Council, Grant MC-A0606-5PQ41.

Publisher Copyright:
© 2022, The Author(s).

Keywords

  • Statistical Power
  • Dimensionality Reduction
  • Cluster Analysis
  • Latent Class Analysis
  • Latent Profile Analysis
  • Simulation
  • Sample Size
  • Effect Size
  • Covariance
