Abstract: Deep neural networks often make overconfident predictions when encountering out-of-distribution (OOD) data. Previous prototype-based methods significantly improved OOD detection performance by optimizing the representation space. However, practical scenarios present a challenge: OOD samples near class boundaries may overlap with in-distribution samples in the feature space, leading to misclassification, and few methods have addressed this challenge. In this work, we propose a margin-based method that introduces a margin into the common instance-prototype contrastive loss. The margin leads to broader decision boundaries and thus better distinguishability of OOD samples. In addition, we leverage learnable prototypes and explicitly maximize prototype dispersion to obtain an improved representation space. We validate the proposed method on several common benchmarks with different scoring functions and architectures. Experimental results show that the proposed method achieves state-of-the-art performance. Code is available at https://github.com/liujunzhuo/MarginOOD.
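The two ingredients the abstract names, a margin inside the instance-prototype contrastive loss and an explicit prototype-dispersion objective, can be illustrated with a short sketch. This is not the authors' implementation (see the linked repository for that); all function names, hyperparameters, and the exact form of the dispersion term below are assumptions for illustration only.

```python
# Hedged sketch (not the paper's code): margin-based instance-prototype
# contrastive loss plus a prototype-dispersion penalty, in NumPy.
import numpy as np

def margin_proto_loss(feats, protos, labels, margin=0.1, temp=0.1, disp_weight=0.5):
    # L2-normalize features and class prototypes so dot products are cosine similarities.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = f @ p.T / temp                      # instance-prototype similarity logits
    # Subtract a margin from each instance's true-class logit: the model must
    # score the correct prototype higher by the margin, widening decision boundaries.
    logits[np.arange(len(labels)), labels] -= margin / temp
    # Cross-entropy of each instance against its class prototype (contrastive term).
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # Dispersion term: penalize high mean pairwise similarity between prototypes,
    # i.e. encourage prototypes to spread out on the unit hypersphere.
    sim = p @ p.T
    off_diag = sim[~np.eye(len(p), dtype=bool)]
    return ce + disp_weight * off_diag.mean()
```

In a real training loop the prototypes would be learnable parameters updated by gradient descent alongside the encoder; here they are plain arrays to keep the sketch self-contained.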