Abstract
Many different metrics are used in machine learning and data mining to build and evaluate models. However, there is no general theory of machine learning metrics that could answer questions such as: When we simultaneously want to optimise two criteria, how can or should they be traded off? Some metrics are inherently independent of class and misclassification cost distributions, while others are not -- can this be made more precise? This paper provides a derivation of ROC space from first principles through 3D ROC space and the skew ratio, and redefines metrics in these dimensions. The paper demonstrates that the graphical depiction of machine learning metrics by means of ROC isometrics gives many useful insights into the characteristics of these metrics, and provides a foundation on which a theory of machine learning metrics can be built.
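To illustrate the notion of ROC isometrics discussed in the abstract, the following minimal sketch (not code from the paper) plots iso-accuracy contours in 2D ROC space. It uses the standard facts that a classifier corresponds to a point (fpr, tpr) and that, for a given proportion of positives, points of equal accuracy lie on straight lines whose slope equals the class skew ratio; the function and variable names are illustrative assumptions.

```python
# Illustrative sketch: iso-accuracy lines in 2D ROC space.
# With positive class proportion pos (and neg = 1 - pos),
#   accuracy = pos * tpr + neg * (1 - fpr),
# so points of equal accuracy lie on lines of slope neg / pos,
# i.e. the class skew ratio.

import numpy as np

def iso_accuracy_line(accuracy, pos, fpr):
    """Return tpr values on the iso-accuracy contour at the given fpr values.

    Values outside [0, 1] simply fall outside the ROC square.
    """
    neg = 1.0 - pos
    # Solve accuracy = pos * tpr + neg * (1 - fpr) for tpr.
    return (accuracy - neg * (1.0 - fpr)) / pos

if __name__ == "__main__":
    fpr = np.linspace(0.0, 1.0, 5)
    for pos in (0.5, 0.25):            # balanced vs. skewed class distribution
        skew = (1.0 - pos) / pos       # slope of the accuracy isometrics
        tpr = iso_accuracy_line(0.8, pos, fpr)
        print(f"pos={pos}: isometric slope (skew ratio) = {skew}, "
              f"tpr on the 80% accuracy line = {tpr}")
```

Changing `pos` changes the slope of the contours, which is one way to see why accuracy is not independent of the class distribution, whereas metrics whose isometrics do not depend on the skew ratio are.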
| Translated title of the contribution | The geometry of ROC space: understanding machine learning metrics through ROC isometrics |
| --- | --- |
| Original language | English |
| Title of host publication | Unknown |
| Publisher | AAAI Press |
| Pages | 194 - 201 |
| Number of pages | 7 |
| ISBN (Print) | 1577351894 |
| Publication status | Published - Jan 2003 |