Evaluating Posterior Probabilities: Decision Theory, Proper Scoring Rules, and Calibration

Published: 18 Jan 2025, Last Modified: 18 Jan 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: Most machine learning classifiers are designed to output posterior probabilities for the classes given the input sample. These probabilities may be used to make the categorical decision on the class of the sample, provided as input to a downstream system, or presented to a human for interpretation. Evaluating the quality of the posteriors generated by these systems is an essential problem that was addressed decades ago with the invention of proper scoring rules (PSRs). Unfortunately, much of the recent machine learning literature uses calibration metrics---most commonly, the expected calibration error (ECE)---as a proxy for assessing posterior performance. The problem with this approach is that calibration metrics reflect only one aspect of the quality of the posteriors, ignoring the discrimination performance. For this reason, we argue that calibration metrics should play no role in the assessment of posterior quality. Expected PSRs should instead be used for this job, preferably normalized for ease of interpretation. In this work, we first give a brief review of PSRs from a practical perspective, motivating their definition using Bayes decision theory. We discuss why expected PSRs provide a principled measure of the quality of a system's posteriors and why calibration metrics are not the right tool for this job. We argue that calibration metrics, while not useful for performance assessment, may be used as diagnostic tools during system development. With this purpose in mind, we discuss a simple and practical calibration metric, called calibration loss, derived from a decomposition of expected PSRs. We compare this metric with the ECE and with the expected score divergence calibration metric from the PSR literature and argue, using theoretical and empirical evidence, that calibration loss is superior to these two metrics.
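To make the contrast concrete, below is a minimal Python sketch (not the paper's implementation; see the repository linked below for the authors' code) that computes the expected log scoring rule, a normalized version of it using a prior-based baseline, and a standard top-label binned ECE. The function names and the choice of normalization baseline are our illustrative assumptions.

```python
import numpy as np

def expected_log_score(posteriors, labels, eps=1e-12):
    """Mean cross-entropy (the logarithmic proper scoring rule) of the
    posteriors with respect to the true labels. Lower is better."""
    p = np.clip(posteriors[np.arange(len(labels)), labels], eps, 1.0)
    return -np.mean(np.log(p))

def normalized_log_score(posteriors, labels):
    """Log score divided by that of a naive system that always outputs
    the class priors. Values below 1.0 mean the posteriors carry more
    information than the priors alone."""
    labels = np.asarray(labels)
    priors = np.bincount(labels, minlength=posteriors.shape[1]) / len(labels)
    naive = -np.mean(np.log(priors[labels]))
    return expected_log_score(posteriors, labels) / naive

def binned_ece(posteriors, labels, n_bins=10):
    """Top-label expected calibration error with equal-width confidence
    bins. Reflects calibration only, not discrimination."""
    conf = posteriors.max(axis=1)
    correct = (posteriors.argmax(axis=1) == np.asarray(labels)).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```

Under this sketch, a system whose posteriors match the class priors on every sample would obtain a perfect ECE of zero while its normalized log score sits at 1.0, i.e., no better than the prior-based baseline; this is the kind of case behind the paper's argument that calibration metrics alone cannot assess posterior quality.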
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Camera-ready version with a few minor edits and the addition of a paragraph discussing the various notions of calibration that appear in the literature (see the first paragraph in section 2.4), as requested by one of the reviewers and the editor. A link to a public GitHub repository with the code (which was submitted as supplementary material in previous versions to preserve anonymity) is now included.
Code: https://github.com/luferrer/expected_cost
Assigned Action Editor: ~Takashi_Ishida1
Submission Number: 3224