Mitigating Overconfidence in Out-of-Distribution Detection by Capturing Extreme Activations

Published: 26 Apr 2024 · Last Modified: 15 Jul 2024 · UAI 2024 poster · CC BY 4.0
Keywords: OOD Detection, Overconfidence, Data Shift
TL;DR: To address overconfidence in OOD detection, we propose capturing extreme activations in the neural network and leveraging them to improve post-hoc OOD detection.
Abstract: Detecting out-of-distribution (OOD) instances is crucial for the reliable deployment of machine learning models in real-world scenarios. OOD inputs are commonly expected to produce more uncertain predictions on the primary task; however, there are OOD cases for which the model returns a highly confident prediction. This phenomenon, known as "overconfidence", poses a challenge to OOD detection. Indeed, theoretical evidence indicates that overconfidence is an intrinsic property of certain neural network architectures, leading to poor OOD detection. In this work, we address this issue by measuring extreme activation values in the penultimate layer of neural networks and then leveraging this proxy of overconfidence to improve several OOD detection baselines. We test our method on a wide array of experiments spanning synthetic and real-world data, tabular and image datasets, multiple architectures such as ResNets and Transformers, and different training loss functions, including the scenarios examined in previous theoretical work. Compared to the baselines, our method often yields substantial improvements, with double-digit increases in OOD detection AUC, and it does not degrade performance in any scenario.
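The abstract outlines a post-hoc scheme: quantify extreme penultimate-layer activations and use that quantity to adjust a baseline OOD score. The snippet below is a minimal sketch of that idea, not the authors' exact CEA implementation (see the Code Url for that); the function names (`calibrate_threshold`, `cea_score`), the percentile `p`, the scale `alpha`, and the choice of the energy score as the baseline are all assumptions made for illustration.

```python
import torch

def calibrate_threshold(id_features: torch.Tensor, p: float = 99.0) -> float:
    # Hypothetical calibration: take a high percentile of in-distribution
    # penultimate-layer activations as the "extreme activation" threshold.
    return torch.quantile(id_features.flatten(), p / 100.0).item()

def energy_score(logits: torch.Tensor) -> torch.Tensor:
    # A standard post-hoc baseline OOD score (higher = more in-distribution).
    return torch.logsumexp(logits, dim=-1)

def cea_score(logits: torch.Tensor, features: torch.Tensor,
              threshold: float, alpha: float = 1.0) -> torch.Tensor:
    # Overconfidence proxy: total activation mass above the threshold.
    excess = torch.clamp(features - threshold, min=0.0).sum(dim=-1)
    # Subtract the proxy from the baseline score so inputs with extreme
    # activations look less in-distribution.
    return energy_score(logits) - alpha * excess
```

In this reading, `features` are the penultimate-layer activations of the test input, `threshold` is fit once on held-out in-distribution features, and the corrected score is thresholded exactly like the baseline it wraps.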
List Of Authors: Azizmalayeri, Mohammad and Abu-Hanna, Ameen and Cinà, Giovanni
Code Url: https://github.com/mazizmalayeri/CEA
Submission Number: 209