Confession Networks: Boosting Accuracy and Improving Confidence in Classification

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Neural Networks, Loss function, Confidence, Computer Vision, Confidence bound
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: In this paper, we propose a novel method for measuring the confidence of neural networks in classification problems.
Abstract: In this paper, we propose a novel method for measuring the confidence of neural networks in classification problems. Statistical approaches for estimating classification confidence already exist; here, we instead propose a new loss function under which the network itself signals how confident it is in its prediction, independently of the prediction. The first goal of this paper is to design an appropriate loss function that outputs a confidence measure alongside the classification scores. The second goal is to examine whether such a loss function can also improve network performance. A confidence measure is important in many applications, such as autonomous driving, where predictions about the area around the vehicle must be reliable, and high-stakes medical diagnosis. We demonstrate that the proposed approach both improves prediction accuracy and provides a valuable output for gauging the confidence of the prediction.
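The abstract does not specify the loss function itself, but the general recipe it describes (a classifier with an auxiliary confidence output trained under a joint objective) can be sketched roughly as follows. This is a minimal illustration only: the two-head architecture, the binary-cross-entropy confidence term, and the weight lambda_conf are assumptions made for the sketch, not the authors' actual formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidenceClassifier(nn.Module):
    """Backbone with two heads: class logits and a scalar confidence in (0, 1)."""
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone
        self.class_head = nn.Linear(feat_dim, num_classes)
        self.conf_head = nn.Linear(feat_dim, 1)

    def forward(self, x):
        feats = self.backbone(x)
        logits = self.class_head(feats)
        # The confidence output is produced separately from the class scores.
        confidence = torch.sigmoid(self.conf_head(feats))
        return logits, confidence

def joint_confidence_loss(logits, confidence, targets, lambda_conf=0.5):
    """Hypothetical joint loss: standard cross-entropy plus a term that pushes the
    confidence output toward 1 on correctly classified samples and toward 0 otherwise.
    (Illustrative only; the paper's actual loss is not given in the abstract.)"""
    ce = F.cross_entropy(logits, targets)
    # argmax is non-differentiable, so 'correct' acts as a fixed target for the confidence head.
    correct = (logits.argmax(dim=1) == targets).float().unsqueeze(1)
    conf_term = F.binary_cross_entropy(confidence, correct)
    return ce + lambda_conf * conf_term

At inference time, a setup like this would let a downstream system (e.g., an autonomous-driving or diagnostic pipeline) threshold the confidence output and defer low-confidence predictions for further review.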
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3878