Description
Organised as part of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), this tutorial introduces fundamental concepts in classifier calibration and gives an overview of recent progress in the enhancement and evaluation of calibration methods. Participants will learn why some training algorithms produce calibrated probability estimates while others do not, and how to apply post-hoc calibration techniques to improve probability estimates, both in theory and in practice, the latter in a dedicated hands-on section. Participants will also learn how to test whether a classifier's outputs are calibrated, and how to assess and evaluate probabilistic classifiers using a range of evaluation metrics and exploratory graphical tools. Additionally, participants will gain a basic appreciation of the more abstract perspective provided by proper scoring rules, and learn about related topics and some open problems in the field.

Period | 14 Sept 2020
---|---
Event type | Workshop
Location | Ghent, Belgium
Degree of Recognition | International
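
The description above mentions post-hoc calibration and calibration assessment. As an illustration only (not taken from the tutorial materials), here is a minimal sketch assuming scikit-learn is available: Platt (sigmoid) scaling via `CalibratedClassifierCV` stands in for the post-hoc calibration maps discussed in the tutorial, a `GaussianNB` base model serves as a typically miscalibrated classifier, and the `expected_calibration_error` helper is a simple hand-rolled binary ECE, not a library function.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.datasets import make_classification
from sklearn.metrics import brier_score_loss, log_loss
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB


def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Simple binned ECE for binary problems: bin-weighted gap between
    mean predicted probability and observed frequency of the positive class."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(y_prob, bins[1:-1])  # bin index 0..n_bins-1 per prediction
    ece = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            ece += mask.mean() * abs(y_prob[mask].mean() - y_true[mask].mean())
    return ece


X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Naive Bayes is a classic example of a usable but poorly calibrated model.
raw = GaussianNB().fit(X_train, y_train)

# Post-hoc calibration: learn a sigmoid (Platt) map on cross-validation folds.
calibrated = CalibratedClassifierCV(GaussianNB(), method="sigmoid", cv=5)
calibrated.fit(X_train, y_train)

for name, model in [("raw", raw), ("sigmoid-calibrated", calibrated)]:
    p = model.predict_proba(X_test)[:, 1]
    print(name,
          "Brier:", round(brier_score_loss(y_test, p), 4),
          "log loss:", round(log_loss(y_test, p), 4),
          "ECE:", round(expected_calibration_error(y_test, p), 4))

# Reliability-diagram data: plotting prob_pred vs prob_true shows how far the
# calibrated probabilities deviate from the diagonal.
prob_true, prob_pred = calibration_curve(
    y_test, calibrated.predict_proba(X_test)[:, 1], n_bins=10)
```

Other calibration maps covered by the tutorial and the related outputs below (beta, Dirichlet, temperature scaling) would replace the sigmoid map in this sketch; only the fitting step changes, the evaluation stays the same.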
Keywords
- machine learning
- classification
- calibration
- uncertainty
- probability
- confidence
- estimation
Related content

Student theses
- Uncertainty aware classification: augmenting classifiers to handle uncertainty
  Student thesis: Doctoral Thesis › Doctor of Philosophy (PhD)

Research Outputs
- Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers
  Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)
- Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration
  Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)
- Non-Parametric Calibration of Probabilistic Regression
  Research output: Working paper
- Distribution Calibration for Regression
  Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)
- Classifier calibration: a survey on how to assess and improve predicted class probabilities
  Research output: Contribution to journal › Article (Academic Journal) › peer-review
- Beyond sigmoids: How to obtain well-calibrated probabilities from binary classifiers with beta calibration
  Research output: Contribution to journal › Article (Academic Journal) › peer-review