Conservative Prediction via Data-Driven Confidence Minimization

Published: 03 Jun 2024, Last Modified: 03 Jun 2024. Accepted by TMLR.
Abstract: In safety-critical applications of machine learning, it is often desirable for a model to be \textit{conservative}, abstaining from making predictions on ``unknown'' inputs which are not well-represented in the training data. However, detecting unknown examples is challenging, as it is impossible to anticipate all potential inputs at test time. To address this, prior work minimizes model confidence on an auxiliary outlier dataset carefully curated to be disjoint from the training distribution. We theoretically analyze the choice of auxiliary dataset for confidence minimization, revealing two actionable insights: (1) if the auxiliary set contains unknown examples similar to those seen at test time, confidence minimization leads to provable detection of unknown test examples, and (2) if the first condition is satisfied, it is unnecessary to filter out known examples for out-of-distribution (OOD) detection. Motivated by these guidelines, we propose the Data-Driven Confidence Minimization (DCM) framework, which minimizes confidence on an \textit{uncertainty dataset}. We apply DCM to two problem settings in which conservative prediction is paramount -- selective classification and OOD detection -- and provide a realistic way to gather uncertainty data for each setting. In our experiments, DCM consistently outperforms existing selective classification approaches on 4 datasets when tested on unseen distributions and outperforms state-of-the-art OOD detection methods on 12 ID-OOD dataset pairs, reducing FPR (at TPR $95\%$) by $6.3\%$ and $58.1\%$ on CIFAR-10 and CIFAR-100 compared to Outlier Exposure.
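For intuition, below is a minimal sketch of the kind of objective the abstract describes: standard cross-entropy on labeled in-distribution data plus a term that minimizes confidence on examples from the uncertainty dataset (here, cross-entropy to the uniform distribution over classes). The function name, the weighting coefficient lambda_unc, and the exact form of the confidence penalty are illustrative assumptions, not the paper's exact implementation.

# Hypothetical sketch of DCM-style training: supervised loss on in-distribution
# data plus a confidence-minimization term on an "uncertainty dataset".
# lambda_unc and the penalty form are assumptions for illustration.
import torch
import torch.nn.functional as F

def dcm_step(model, labeled_batch, uncertainty_batch, lambda_unc=0.5):
    x, y = labeled_batch          # in-distribution inputs and labels
    x_unc = uncertainty_batch     # unlabeled examples from the uncertainty set

    # Usual supervised objective on known data.
    ce_loss = F.cross_entropy(model(x), y)

    # Confidence minimization: cross-entropy between the model's predictions
    # on the uncertainty set and the uniform distribution over classes.
    log_probs_unc = F.log_softmax(model(x_unc), dim=1)
    conf_loss = -log_probs_unc.mean()

    return ce_loss + lambda_unc * conf_loss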
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera-ready revisions:
- Better explanation of the intuition behind the method, especially the analysis of its experimental performance.
- Short discussion of data augmentation in the context of uncertainty quantification and OOD detection.
- Added top-1 accuracies to the main tables.
- Clarified assumptions on threshold selection and uncertainty set construction.
- Added a brief discussion of the problem setting of rejecting both ID misclassifications and OOD samples, with accompanying references.
- Elaborated on the theoretical assumptions underlying our models, particularly the utility and limitations of our propositions about image-space distances.
- Added clarifications re: the WOODS baseline and citations re: the relationship between feature-space distances and semantic relations.
- Removed redundant tables in the appendix (Tables 11 and 12 in the old version).
Assigned Action Editor: ~Yingzhen_Li1
Submission Number: 2101