Beyond Accuracy: Controlling Broad Error Types in Selective Classification

Published: 13 Apr 2026 · Last Modified: 13 Apr 2026 · Calibration for Modern AI @ AISTATS 2026 · License: CC BY 4.0
Keywords: Selective Prediction, Abstention, Reliable Machine Learning, Metric-Specific Guarantees, Medical Imaging
TL;DR: We introduce a Selective Classification framework that provides guarantees on specific error metrics (like false negative rate), enabling domain-aware trade-offs beyond accuracy.
Abstract: Selective prediction allows classifiers to abstain on uncertain inputs, improving reliability in high-stakes applications. Existing frameworks, such as Selection with Guaranteed Risk (SGR), provide tight statistical guarantees on overall misclassification risk, but this is inadequate for many applications—particularly in medicine—where guarantees on specific error types, such as false positives, false negatives, or positive predictive value (PPV), are required. We propose a general framework for selective binary classification with metric-specific guarantees. Our theory extends risk control from 0/1 loss to arbitrary binary losses and derives new high-probability bounds for the corresponding metrics under abstention. We instantiate the framework with neural networks and validate it on public imaging datasets, showing that our metric-aware selective classifiers better capture domain-specific trade-offs than accuracy-based approaches. Code and reproducibility artifacts are released at https://github.com/EmilienJemelen/selective-classification.
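To make the idea of metric-specific selective guarantees concrete, the sketch below shows one simple way such a guarantee can be instantiated: calibrate an abstention threshold on held-out data so that the empirical false negative rate among accepted points, plus a Hoeffding slack at confidence level delta, stays below a target. This is an illustrative assumption-laden sketch, not the paper's actual bound; the function name and the Hoeffding-style slack are choices made here for exposition.

```python
import numpy as np

def fnr_controlled_threshold(probs, labels, target_fnr=0.05, delta=0.1):
    """Illustrative sketch (not the paper's method): pick an abstention
    threshold tau on confidence max(p, 1-p) such that, among accepted
    positives, the empirical FNR plus a Hoeffding slack at level delta
    is below target_fnr. Returns None if no threshold qualifies.

    probs  : model-predicted P(y = 1) for each calibration point
    labels : ground-truth binary labels in {0, 1}
    """
    conf = np.maximum(probs, 1 - probs)       # selective confidence score
    preds = (probs >= 0.5).astype(int)        # accepted-point predictions
    for tau in np.sort(np.unique(conf)):      # scan candidate thresholds
        accepted = conf >= tau
        pos = accepted & (labels == 1)        # accepted true positives
        n = pos.sum()
        if n == 0:
            continue
        fnr_hat = (preds[pos] == 0).mean()    # empirical FNR on accepted positives
        slack = np.sqrt(np.log(1 / delta) / (2 * n))  # Hoeffding deviation term
        if fnr_hat + slack <= target_fnr:
            return tau
    return None
```

In practice one would then deploy the classifier with this tau, abstaining whenever confidence falls below it; the paper's framework derives tighter, metric-general bounds than the crude Hoeffding slack used here.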
Submission Number: 6