Classifier Calibration

Research output: Chapter in Book/Report/Conference proceeding › Entry for encyclopedia/dictionary


Classifier calibration is concerned with the scale on which a classifier’s scores are expressed. While a classifier ultimately maps instances to discrete classes, it is often beneficial to decompose this mapping into a scoring classifier, which outputs one or more real-valued numbers, and a decision rule, which converts these numbers into predicted classes. For example, a linear classifier might output a positive or negative score whose magnitude is proportional to the distance between the instance and the decision boundary, in which case the decision rule would be a simple threshold on that score. The advantage of calibrating these scores to a known, domain-independent scale is that the decision rule then also takes a domain-independent form and does not have to be learned. The best-known example of this occurs when the classifier’s scores approximate, in a precise sense, the posterior probability over the classes; the main advantage of this is that the optimal decision rule is to predict the class that minimizes expected cost averaged over all possible true classes.

The main methods to obtain calibrated scores are logistic calibration, a parametric method that assumes that the distances on either side of the decision boundary are normally distributed, and a nonparametric alternative that is variously known as isotonic regression, the pool adjacent violators (PAV) method, or the ROC convex hull (ROCCH) method.
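The nonparametric PAV method mentioned in the abstract can be sketched in a few lines. The sketch below is illustrative rather than taken from the entry: it sorts instances by score, pools adjacent groups whose label averages violate monotonicity, and returns the resulting piecewise-constant probability estimates.

```python
def pav_calibrate(scores, labels):
    """Pool Adjacent Violators: turn raw scores into calibrated probabilities.

    `scores` are real-valued classifier outputs, `labels` are 0/1 true classes.
    Returns one probability per instance, non-decreasing in the score.
    Illustrative sketch, not the encyclopedia entry's own code.
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    # Each block stores [sum of labels, count]; adjacent blocks are merged
    # while an earlier block's mean exceeds a later block's mean.
    blocks = []
    for i in order:
        blocks.append([labels[i], 1])
        while (len(blocks) > 1
               and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]):
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    # Expand each block's mean back to its instances, in the original order.
    calibrated = [0.0] * len(scores)
    k = 0
    for s, n in blocks:
        for _ in range(n):
            calibrated[order[k]] = s / n
            k += 1
    return calibrated
```

With scores `[1, 2, 3]` and labels `[1, 0, 1]`, the first two instances violate monotonicity and are pooled, giving calibrated probabilities `[0.5, 0.5, 1.0]`. This is the same fit produced by isotonic regression on the score–label pairs.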
Original language: English
Title of host publication: Encyclopedia of Machine Learning and Data Mining
Editors: Claude Sammut, Geoffrey I. Webb
Publisher: Springer US
Number of pages: 8
ISBN (Electronic): 9781489975027
Publication status: Published - 12 Aug 2016

Structured keywords

  • Jean Golding


