We study the design of cost-sensitive learning algorithms with example-dependent costs, where a cost matrix for each example is available at both training and test time. The approach is based on the empirical risk minimization framework, in which we replace the standard loss function by a combination of surrogate losses belonging to the family of proper losses. The contribution of each example to the risk is then given by a loss that depends on that example's cost matrix. We evaluate such example-dependent loss functions on real-world binary and multiclass problems, namely credit risk assessment and musical genre classification. Using different neural network architectures, we show that, with an appropriate choice of example-dependent losses, we can outperform conventional cost-sensitive methods in terms of total cost, making more efficient use of cost information during training and test than existing discriminative approaches.
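The core idea, an empirical risk where each example's loss term is weighted by its own misclassification costs, can be sketched for the binary case as a cost-weighted cross-entropy. This is a minimal illustrative sketch of a generic example-dependent surrogate, not the paper's exact proper-loss construction; the names `c_fn` and `c_fp` (per-example costs for false negatives and false positives) are assumptions for illustration.

```python
import numpy as np

def example_dependent_loss(y_true, p_pred, c_fn, c_fp, eps=1e-12):
    """Cost-weighted cross-entropy.

    Each example's loss is scaled by its own misclassification costs:
    c_fn weights the positive-class term, c_fp the negative-class term.
    A generic cost-sensitive surrogate, shown only as an illustration.
    """
    p = np.clip(p_pred, eps, 1 - eps)  # avoid log(0)
    return -(y_true * c_fn * np.log(p)
             + (1 - y_true) * c_fp * np.log(1 - p))

# Per-example cost matrices: missing a positive is three times
# costlier for the first example than for the second.
y = np.array([1.0, 0.0])          # labels
p = np.array([0.8, 0.3])          # predicted P(y = 1)
c_fn = np.array([3.0, 1.0])       # cost of a false negative
c_fp = np.array([1.0, 1.0])       # cost of a false positive
losses = example_dependent_loss(y, p, c_fn, c_fp)
risk = losses.mean()              # empirical risk under example costs
```

Averaging these per-example terms gives the cost-sensitive empirical risk that a network is trained to minimize; at test time, the same per-example cost matrices determine the cost-minimizing decision.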
Title of host publication: Second International Workshop on Learning with Imbalanced Domains
Subtitle of host publication: Theory and Applications, 10 September 2018, ECML-PKDD, Dublin, Ireland
Number of pages: 15
Publication status: Published - 5 Nov 2018
Series: Proceedings of Machine Learning Research
Keywords:
- Proper losses
- Bregman divergences
Hepburn, A., McConville, R., Santos-Rodriguez, R., Cid-Sueiro, J., & García-García, D. (2018). Proper Losses for Learning with Example-Dependent Costs. In Second International Workshop on Learning with Imbalanced Domains: Theory and Applications, 10 September 2018, ECML-PKDD, Dublin, Ireland (pp. 52-66). [hepburn18a] (Proceedings of Machine Learning Research; Vol. 94). PMLR.