Generalized Outlier Exposure: Towards a trustworthy out-of-distribution detector without sacrificing accuracy

Jiin Koo, Sungjoon Choi, Sangheum Hwang

Published: 01 Apr 2024, Last Modified: 04 Nov 2025 · Neurocomputing · CC BY-SA 4.0
Abstract: Despite the remarkable performance of deep neural networks (DNNs), it is often challenging to employ DNNs in safety-critical applications because of their overconfident predictions, even on out-of-distribution (OoD) samples. This has motivated the task of OoD detection, for which Outlier Exposure (OE) demonstrated strong performance by leveraging auxiliary OoD training samples. However, OE and its variants degrade in-distribution (ID) classification performance, and this issue remains unresolved. To address it, we propose Generalized OE (G-OE), which linearly mixes training data drawn from all given samples, including OoD ones, to produce reliable uncertainty estimates. G-OE also includes an effective filtering strategy that reduces the negative effect of OoD samples semantically similar to ID samples. We extensively evaluate G-OE on SC-OoD benchmarks: it improves both OoD detection and ID classification compared to existing OE-based methods.
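The core idea of linearly mixing ID and OoD training samples can be illustrated with a minimal mixup-style sketch. This is an assumption-laden illustration, not the paper's exact formulation: the function name `mixup_oe_batch`, the Beta-distributed mixing coefficient, and the choice of uniform soft targets for OoD samples (as in standard Outlier Exposure) are all illustrative choices.

```python
import numpy as np

def mixup_oe_batch(x_id, y_id, x_ood, num_classes, alpha=1.0, rng=None):
    """Sketch of mixing ID and OoD samples for uncertainty-aware training.

    Hypothetical illustration of the idea behind G-OE; the paper's exact
    formulation may differ. ID samples carry one-hot targets, while OoD
    samples carry uniform targets (the Outlier Exposure convention), so the
    mixed target smoothly interpolates between a confident label and
    maximal uncertainty.
    """
    rng = rng or np.random.default_rng(0)
    # Mixing coefficient drawn from Beta(alpha, alpha), as in mixup.
    lam = rng.beta(alpha, alpha)
    # One-hot targets for ID samples, uniform targets for OoD samples.
    t_id = np.eye(num_classes)[y_id]
    t_ood = np.full((x_ood.shape[0], num_classes), 1.0 / num_classes)
    # Linear interpolation of inputs and targets.
    x_mix = lam * x_id + (1.0 - lam) * x_ood
    t_mix = lam * t_id + (1.0 - lam) * t_ood
    return x_mix, t_mix
```

Because each mixed target is a convex combination of a one-hot vector and the uniform distribution, every target row still sums to one, so the result can be trained against with a standard soft-label cross-entropy loss.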