DMR: Disentangling Marginal Representations for Out-of-Distribution Detection

Published: 01 Jan 2024 · Last Modified: 19 May 2025 · CVPR Workshops 2024 · CC BY-SA 4.0
Abstract: Out-of-Distribution (OOD) detection is crucial for the reliable deployment of deep-learning applications. When an input image does not belong to any category known to the deployed classification model, the model is expected to alert the user that its predictions may be unreliable. Recent studies have shown that training with a large amount of explicit OOD data improves OOD detection performance. However, collecting explicit real-world OOD data is burdensome, and pre-defining all out-of-distribution labels is fundamentally difficult. In this work, we present a novel method, Disentangling Marginal Representations (DMR), which generates artificial OOD training data by extracting marginal features from images in an In-Distribution (ID) training dataset and manipulating these extracted marginal representations. DMR is intuitive and offers a practical solution that requires no additional real-world OOD data. Moreover, it can be applied directly to pre-trained classifier networks without affecting their original classification performance. We demonstrate that a shallow rejection network, trained on a small set of OOD training data synthesized by our method and attachable to the classifier network, achieves superior OOD detection performance. Extensive experiments show that our proposed method significantly outperforms state-of-the-art OOD detection methods on the widely used CIFAR-10 and CIFAR-100 OOD detection benchmarks, and that it can be further improved when combined with existing methods. The source code is publicly available at https://github.com/ndb796/DMR.
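The abstract describes the pipeline only at a high level, so the following is a minimal illustrative sketch rather than the authors' actual DMR implementation. It assumes a frozen pre-trained backbone that exposes penultimate features, synthesizes pseudo-OOD features by a simple (hypothetical) mixing of ID features as a stand-in for DMR's manipulation of marginal representations, and trains a small attachable rejection head to separate ID features from the synthesized ones. The names `RejectionHead`, `synthesize_pseudo_ood`, and `backbone`, as well as the mixing scheme, are assumptions, not details taken from the paper.

```python
# Conceptual sketch only (NOT the authors' DMR code): train a shallow rejection head
# on top of a frozen classifier using pseudo-OOD features synthesized from ID features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RejectionHead(nn.Module):
    """Shallow network attached on top of frozen classifier features."""

    def __init__(self, feat_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit: ID (1) vs. pseudo-OOD (0)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats).squeeze(-1)


def synthesize_pseudo_ood(feats: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Hypothetical stand-in for DMR's representation manipulation: mix each ID
    feature with a randomly permuted feature from another sample so the result
    no longer matches any single ID category."""
    perm = torch.randperm(feats.size(0), device=feats.device)
    return alpha * feats + (1.0 - alpha) * feats[perm]


def train_step(backbone: nn.Module, head: RejectionHead,
               images: torch.Tensor, optimizer: torch.optim.Optimizer) -> float:
    """One training step for the rejection head; the classifier stays frozen."""
    backbone.eval()
    with torch.no_grad():
        id_feats = backbone(images)  # assumed to return (B, feat_dim) penultimate features
    ood_feats = synthesize_pseudo_ood(id_feats)

    feats = torch.cat([id_feats, ood_feats], dim=0)
    labels = torch.cat([torch.ones(id_feats.size(0)),
                        torch.zeros(ood_feats.size(0))]).to(feats.device)

    logits = head(feats)
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In such a setup the optimizer would be built over `head.parameters()` only, so the pre-trained classifier's weights, and therefore its original classification accuracy, remain untouched, mirroring the attachable design the abstract describes.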