Classifier Calibration

Flach, P. A. (Organiser), Perello Nieto, M. (Organiser), Song, H. (Organiser), Kull, M. (Organiser), De Menezes E Silva Filho, T. (Organiser)

Activity: Participating in or organising an event type: Participation in workshop, seminar, course


Organised as part of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD). This tutorial introduces fundamental concepts in classifier calibration and gives an overview of recent progress in the enhancement and evaluation of calibration methods. Participants will learn why some training algorithms produce calibrated probability estimates while others do not, and how to apply post-hoc calibration techniques to improve probability estimates, both in theory and in practice through a dedicated hands-on section. Participants will furthermore learn how to test whether a classifier's outputs are calibrated, and how to assess and evaluate probabilistic classifiers using a range of evaluation metrics and exploratory graphical tools. Additionally, participants will gain a basic appreciation of the more abstract perspective provided by proper scoring rules, and learn about related topics and some open problems in the field.
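To make the two central ideas concrete, here is a minimal sketch (not taken from the tutorial materials) of post-hoc calibration via Platt scaling and of one common calibration metric, the expected calibration error (ECE). The data, function names, and hyperparameters are illustrative assumptions, not part of the tutorial itself.

```python
import numpy as np

def platt_scale(scores, labels, lr=0.2, n_iter=3000):
    """Fit Platt scaling p = sigmoid(a*s + b) by gradient descent on log loss.
    (Toy fitter for illustration; real libraries use more robust optimisers.)"""
    a, b = 1.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - labels                      # d(log loss)/dz for a sigmoid output
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

def ece(probs, labels, n_bins=10):
    """Expected Calibration Error with equal-width probability bins:
    weighted gap between mean predicted probability and empirical frequency."""
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    err = 0.0
    for k in range(n_bins):
        mask = bins == k
        if mask.any():
            err += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return err

# Toy data: an "overconfident" classifier whose log-odds are twice too large.
rng = np.random.default_rng(0)
true_p = rng.uniform(0.05, 0.95, 2000)
labels = (rng.uniform(0, 1, 2000) < true_p).astype(float)
scores = 2.0 * np.log(true_p / (1.0 - true_p))     # inflated log-odds
raw_probs = 1.0 / (1.0 + np.exp(-scores))

a, b = platt_scale(scores, labels)
cal_probs = 1.0 / (1.0 + np.exp(-(a * scores + b)))
print(f"ECE before: {ece(raw_probs, labels):.3f}, after: {ece(cal_probs, labels):.3f}")
```

On this synthetic example the fitted slope shrinks the inflated log-odds back toward the truth, and the ECE drops accordingly; a reliability diagram (predicted probability vs. empirical frequency per bin) would show the same effect graphically.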
Period: 14 Sep 2020
Event type: Workshop
Location: Ghent, Belgium
Degree of Recognition: International


  • machine learning
  • classification
  • calibration
  • uncertainty
  • probability
  • confidence
  • estimation