Abstract: Deep networks for classification are typically trained by maximizing the log likelihood of the training data. However, the conditional probabilities learned in this way are often not well-calibrated and are thus not well-suited for cost-sensitive learning, where making different errors incurs different rewards or penalties. In this paper, we propose to directly train neural networks to optimize a cost-sensitive loss via Empirical Risk Minimization (ERM). Empirical results show that, with proper initialization, ERM training with a cost-sensitive loss outperforms maximum-likelihood training with various forms of post-processing on a range of cost-sensitive classification tasks.
Keywords: Deep Learning, Cost-Sensitive Learning, Policy Learning, Classification
TL;DR: Approach cost-sensitive classification with deep neural networks by directly learning a policy via empirical risk minimization
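The abstract does not include code, but the core idea it describes, replacing the cross-entropy objective with the expected cost of the model's predictions under a task-specific cost matrix, can be sketched briefly. The snippet below is an illustrative PyTorch sketch under that reading, not the authors' implementation; the network, cost matrix, data, and the function `expected_cost_loss` are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

def expected_cost_loss(logits, targets, cost_matrix):
    """Expected misclassification cost under the model's softmax distribution.

    logits:      (batch, num_classes) raw network outputs
    targets:     (batch,) true class indices
    cost_matrix: (num_classes, num_classes), cost_matrix[y, k] = cost of
                 predicting class k when the true class is y
    """
    probs = torch.softmax(logits, dim=1)      # (batch, num_classes)
    costs = cost_matrix[targets]              # per-example cost rows
    return (probs * costs).sum(dim=1).mean()  # average expected cost

# Toy 3-class cost matrix: off-diagonal entries encode asymmetric penalties.
num_classes = 3
cost_matrix = torch.tensor([[0.0, 1.0, 5.0],
                            [1.0, 0.0, 1.0],
                            [10.0, 1.0, 0.0]])

# Small placeholder model and synthetic batch for illustration only.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, num_classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 20)
y = torch.randint(0, num_classes, (32,))

# One ERM training step on the cost-sensitive loss. Per the abstract,
# initializing the model from a maximum-likelihood (cross-entropy) solution
# before switching to this objective may matter in practice.
loss = expected_cost_loss(model(x), y, cost_matrix)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```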