A Novel Characterization of the Population Area Under the Risk Coverage Curve (AURC) and Rates of Finite Sample Estimators

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · License: CC BY 4.0
TL;DR: Revisit the characterization of the Area Under the Risk Coverage Curve (AURC)
Abstract: Selective classifiers (SCs) have been proposed for rank-based uncertainty thresholding, with applications in safety-critical areas such as medical diagnostics, autonomous driving, and the justice system. The Area Under the Risk-Coverage Curve (AURC) has emerged as the foremost evaluation metric for assessing the performance of SC systems. In this work, we present a formal statistical formulation of the population AURC, deriving an equivalent expression that can be interpreted as a reweighted risk function. Through Monte Carlo methods, we obtain empirical plug-in estimators of the AURC for finite-sample scenarios. The weight estimators associated with these plug-in estimators are shown to be consistent, with low bias and tightly bounded mean squared error (MSE). The plug-in estimators are proven to converge at a rate of $\mathcal{O}(\sqrt{\ln(n)/n})$, establishing statistical consistency. We empirically validate the effectiveness of our estimators through experiments across multiple datasets, model architectures, and confidence score functions (CSFs), demonstrating their consistency and effectiveness in fine-tuning AURC performance.
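To make the risk-coverage idea concrete, here is a minimal sketch of a standard empirical AURC computation: samples are ranked by confidence, the selective risk is computed at each coverage level induced by that ranking, and the resulting risks are averaged. The function name `empirical_aurc` and the toy data are illustrative assumptions; this is the generic plug-in idea, not the paper's exact estimator.

```python
import numpy as np

def empirical_aurc(confidence, loss):
    """Illustrative empirical AURC: average the selective risk over the
    coverage levels 1/n, 2/n, ..., 1 induced by ranking samples from most
    to least confident. A sketch of the generic plug-in approach."""
    order = np.argsort(-confidence)              # most confident first
    sorted_loss = loss[order]
    # selective risk at coverage k/n = mean loss over the k most confident samples
    cum_risk = np.cumsum(sorted_loss) / np.arange(1, len(loss) + 1)
    return cum_risk.mean()

# Toy example: a confidence ranking that places errors on the least-confident
# samples yields a lower AURC than one that errs on the most-confident sample.
conf = np.array([0.9, 0.8, 0.7, 0.6])
loss_good = np.array([0.0, 0.0, 0.0, 1.0])       # mistake on least-confident
loss_bad = np.array([1.0, 0.0, 0.0, 0.0])        # mistake on most-confident
assert empirical_aurc(conf, loss_good) < empirical_aurc(conf, loss_bad)
```

Viewed this way, the AURC is a weighted average of per-sample losses, where a sample's weight depends on its confidence rank; this is the reweighted-risk interpretation the abstract refers to.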
Lay Summary: When models are used in high-stakes situations such as healthcare, driving, or legal decisions, it is important not only to make predictions, but also to know when to trust a model's answers and when to hold back. Selective classifiers are systems designed to do just that: they make a prediction only when they are confident, and otherwise abstain to avoid costly mistakes. But how can we measure how well these systems balance safety with making useful decisions? In our research, we focus on an evaluation metric called the Area Under the Risk-Coverage Curve (AURC), which captures how effectively a system manages the trade-off between accuracy and caution. We developed a novel statistical method to interpret and estimate this metric, even when data is limited. Our approach is not only statistically sound, becoming more accurate as more data is collected, but also practical. We tested our method on various datasets and models, showing that it works reliably. This research helps make future AI systems more dependable when uncertainty really matters.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/han678/AsymptoticAURC
Primary Area: Applications->Computer Vision
Keywords: Selective classifier; Area Under the Risk-Coverage Curve
Submission Number: 6136