Cost-sensitive boosting algorithms: Do we really need them?

Research output: Contribution to journal › Article

Original language: English
Pages (from-to): 359–384
Number of pages: 26
Journal: Machine Learning
Volume: 104
Issue number: 2
Early online date: 2 Aug 2016
DOI: 10.1007/s10994-016-5572-x
Date accepted/in press: 24 Jun 2016
Date e-pub ahead of print: 2 Aug 2016
Date published (current): 1 Sep 2016

Abstract

We provide a unifying perspective on two decades of work on cost-sensitive boosting algorithms. Analyzing the literature from 1997 to 2016, we find 15 distinct cost-sensitive variants of the original algorithm, each with its own motivation and claims to superiority; so whom should we believe? In this work we critique the boosting literature using four theoretical frameworks: Bayesian decision theory, the functional gradient descent view, margin theory, and probabilistic modelling. We find that only three of the algorithms are fully supported, and the probabilistic modelling view suggests that all of them require their outputs to be calibrated for best performance. Experiments on 18 datasets, across 21 degrees of imbalance, support this hypothesis: once calibrated, the three algorithms perform equivalently and outperform all others. Our final recommendation, based on simplicity, flexibility and performance, is to use the original AdaBoost algorithm with a shifted decision threshold and calibrated probability estimates.
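The shifted decision threshold recommended above follows from Bayesian decision theory: given calibrated probability estimates, the minimum-expected-cost rule predicts the positive class whenever the estimated probability exceeds a threshold determined by the misclassification costs. The sketch below is an illustrative reconstruction of that rule, not the authors' code; the function names and the cost values in the usage example are hypothetical.

```python
# Hedged sketch of cost-sensitive classification via a shifted decision
# threshold on calibrated probability estimates (Bayesian decision theory).
# Cost values below are hypothetical placeholders, not from the paper.

def bayes_threshold(c_fp: float, c_fn: float) -> float:
    """Minimum-expected-cost threshold for predicting the positive class,
    given the cost of a false positive (c_fp) and a false negative (c_fn)."""
    return c_fp / (c_fp + c_fn)

def decide(p_pos: float, c_fp: float, c_fn: float) -> int:
    """Predict 1 (positive) iff the calibrated probability of the positive
    class meets or exceeds the cost-derived threshold."""
    return int(p_pos >= bayes_threshold(c_fp, c_fn))

# Example: false negatives nine times as costly as false positives,
# so the threshold drops from the usual 0.5 to 0.1.
print(bayes_threshold(1.0, 9.0))  # 0.1
print(decide(0.2, 1.0, 9.0))      # 1 (flagged positive under skewed costs)
print(decide(0.2, 1.0, 1.0))      # 0 (not flagged under equal costs)
```

With equal costs the threshold reduces to 0.5, recovering the standard decision rule, which is why calibration plus thresholding needs no change to the underlying AdaBoost training procedure.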

    Research areas

  • Boosting, Class imbalance, Classifier calibration, Cost-sensitive

    Structured keywords

  • Jean Golding

Documents

  • Full-text PDF (final published version)

    Rights statement: This is the final published version of the article (version of record). It first appeared online via Springer at http://link.springer.com/article/10.1007/s10994-016-5572-x. Please refer to the publisher's applicable terms of use.

    Final published version, 1 MB, PDF document

    Licence: CC BY
