This paper gives an overview of some of the ways in which our understanding of performance evaluation measures for machine-learned classifiers has improved over the last twenty years. I also highlight a range of areas where this understanding is still lacking, leading to ill-advised practices in classifier evaluation. This suggests that in order to make further progress we need to develop a proper measurement theory of machine learning. I then demonstrate by example what such a measurement theory might look like and what kinds of new results it would entail. Finally, I argue that key properties such as classification ability and data set difficulty are unlikely to be directly observable, suggesting the need for latent-variable models and causal inference.
Title of host publication: Proceedings of the AAAI Conference on Artificial Intelligence
Number of pages: 7
Publication status: Published - 17 Jul 2019
Event: AAAI Conference on Artificial Intelligence, Hilton Hawaiian Village, Honolulu, United States
Duration: 27 Jan 2019 → 1 Feb 2019
Performance Evaluation in Machine Learning: The Good, the Bad, the Ugly, and the Way Forward