MAXENT LOSS: CONSTRAINED MAXIMUM ENTROPY FOR CALIBRATING DEEP NEURAL NETWORKS

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Calibration, Out-of-distribution, Loss function, Machine learning safety, Overconfidence, Robustness, Distribution shifts
TL;DR: A novel loss function involving constraints, used to improve model calibration on OOD data.
Abstract: Miscalibration distorts the relationship between a model's confidence and its correctness, making its predictions unreliable for real-world deployment. In general, we want dependable and meaningful probabilistic estimates of a model's uncertainty, which are essential in real-world applications. These applications may involve inputs that are out-of-distribution (OOD) and can differ widely from the given training distribution. Motivated by the Principle of Maximum Entropy, we show that -- compared to the conventional cross-entropy loss and the focal loss -- training neural networks with additional statistical constraints improves calibration while retaining recognition accuracy. We evaluate our method extensively on augmented and in-the-wild OOD computer vision datasets and show that our MaxEnt loss achieves state-of-the-art calibration in all cases. Our code will be made available upon acceptance.
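
For intuition only, the sketch below shows one generic way an entropy-based regularizer can be combined with cross-entropy during training. This is not the paper's constrained MaxEnt loss; the function name, the penalty form, and the multiplier `lam` are illustrative assumptions standing in for the statistical constraints described in the abstract.

```python
import torch
import torch.nn.functional as F

def maxent_style_loss(logits: torch.Tensor, targets: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    """Illustrative sketch: cross-entropy plus a maximum-entropy-style penalty.

    NOTE: this is NOT the authors' MaxEnt loss; `lam` plays the role of a
    hypothetical Lagrangian-style multiplier on an entropy term.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    # Standard cross-entropy term, responsible for recognition accuracy.
    ce = F.nll_loss(log_probs, targets)

    # Mean predictive entropy; the Maximum Entropy principle prefers the
    # highest-entropy distribution consistent with the constraints, so the
    # sketch rewards higher entropy to discourage overconfident predictions.
    entropy = -(probs * log_probs).sum(dim=-1).mean()

    return ce - lam * entropy
```

In this simplified view, larger values of the assumed multiplier `lam` push the predictive distribution toward higher entropy (less overconfidence), while the cross-entropy term keeps it consistent with the labels.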
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning