Socrates Loss: Unifying Confidence Calibration and Classification by Leveraging the Unknown

Published: 10 Apr 2026, Last Modified: 10 Apr 2026. Accepted by TMLR. License: CC BY 4.0
Abstract: Deep neural networks, despite their high accuracy, often exhibit poor confidence calibration, limiting their reliability in high-stakes applications. Current ad-hoc confidence calibration methods attempt to fix this during training but face a fundamental trade-off: two-phase training methods achieve strong classification performance at the cost of training instability and poorer confidence calibration, while single-loss methods are stable but underperform in classification. This paper addresses and mitigates this stability-performance trade-off. We propose Socrates Loss, a novel, unified loss function that explicitly leverages uncertainty by incorporating an auxiliary unknown class, whose predictions directly shape the loss, together with a dynamic uncertainty penalty. This unified objective allows the model to be optimized for classification and confidence calibration simultaneously, without the instability of complex, scheduled losses. We provide theoretical guarantees that our method regularizes the model against miscalibration and overfitting. Across four benchmark datasets and multiple architectures, our comprehensive experiments demonstrate that Socrates Loss consistently improves training stability while achieving a more favorable accuracy-calibration trade-off, often converging faster than existing methods.
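The abstract describes a single objective that combines standard classification over the known classes with an auxiliary unknown class and a dynamic uncertainty penalty. The exact formulation is given in the paper and the linked code; the following is only a minimal sketch of one plausible reading, in which the model outputs K+1 logits (the last being the unknown class) and the loss adds a penalty proportional to the unknown-class probability. The function name, the additive form, and the `penalty_weight` hyperparameter are assumptions for illustration, not the authors' definition.

```python
import numpy as np

def socrates_loss_sketch(logits, target, penalty_weight=0.1):
    """Hedged sketch of a unified calibration-aware loss.

    `logits` has K+1 entries: K known classes plus an auxiliary
    'unknown' class at index K. The loss is standard cross-entropy
    on the target class plus a penalty that grows with the mass
    placed on the unknown class. The actual Socrates Loss may use
    a different (e.g. dynamically scheduled) penalty.
    """
    z = logits - logits.max()             # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()   # softmax over all K+1 classes
    ce = -np.log(probs[target] + 1e-12)   # cross-entropy on the true class
    p_unknown = probs[-1]                 # probability of the unknown class
    penalty = penalty_weight * p_unknown  # assumed uncertainty penalty term
    return ce + penalty
```

Under this reading, raising the unknown-class logit both inflates the penalty and drains probability mass from the target class, so the loss discourages hedging into "unknown" on examples the model should classify confidently.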
Submission Type: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=S9DGZaDYJP
Changes Since Last Submission: Small rewording in the last paragraph of the Introduction, last paragraph of the Abstract, and in the Conclusion to improve readability.
Video: https://youtu.be/7WuSkC-aWW8?si=9fgq5ZN7euIyGZGU
Code: https://github.com/sandruskyi/SocratesLoss
Assigned Action Editor: ~Jose_Dolz1
Submission Number: 6786