Keywords: out-of-distribution detection, prior, intra-channel activation pattern
Abstract: Out-of-distribution (OOD) detection is a crucial technique for deploying machine learning models in the real world to handle unseen scenarios. Compared to standard classification tasks, OOD detection poses significant challenges due to the unpredictable nature of OOD data and the inherent difficulty of collecting it. Consequently, a natural solution is to develop priors that are as diverse as possible and that effectively characterize the features of OOD data. In this paper, we first propose a simple yet effective Neural Activation Prior (NAP) for OOD detection. Our prior is based on a key observation: for a channel before the pooling layer of a fully trained neural network, the probability that a few neurons are activated with a large response by an in-distribution (ID) sample is significantly higher than that by an OOD sample. An intuitive explanation is that, for a model fully trained on an ID dataset, each channel plays a role in detecting a certain pattern of the ID dataset, and a few neurons are activated with a large response when that pattern is detected in an input sample. We then propose an effective scoring function based on this prior to highlight the role of these strongly activated neurons in OOD detection. Our approach is plug-and-play, causes no performance degradation on ID classification, and requires no extra training and no statistics from training or external datasets. To the best of our knowledge, our method is the first to exploit intra-channel activation pattern information, which makes it orthogonal to existing approaches and allows it to be effectively combined with them in various applications. Furthermore, we conduct an elegant oracle experiment to validate the rationale behind the proposed scoring function. Extensive experimental results demonstrate the effectiveness of our method. Moreover, our approach significantly boosts performance when integrated with most existing methods, showcasing the unique attributes of the proposed prior.
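The abstract does not spell out the exact form of the scoring function, so the following is only a minimal sketch of how a prior of this kind could be turned into a score. It assumes (as one plausible instantiation) that "a few strongly activated neurons within a channel" can be captured by the ratio of the maximum to the mean activation in each pre-pooling channel; the function name `nap_style_score` and the averaging over channels are illustrative choices, not the authors' definition.

```python
import torch


def nap_style_score(feature_map: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Hypothetical OOD score built on intra-channel activation patterns.

    feature_map: pre-pooling activations of shape (B, C, H, W), e.g. the
    output of the last convolutional block before global average pooling.
    For each channel, the ratio between the maximum and the mean activation
    is used as a proxy for "a few neurons firing with a large response";
    the per-channel ratios are averaged into one score per sample, where a
    higher score is read as more likely in-distribution under this sketch.
    """
    b, c, h, w = feature_map.shape
    flat = feature_map.view(b, c, h * w)
    per_channel_max = flat.max(dim=-1).values            # (B, C)
    per_channel_mean = flat.mean(dim=-1)                  # (B, C)
    ratio = per_channel_max / (per_channel_mean + eps)    # (B, C), eps guards all-zero channels
    return ratio.mean(dim=-1)                             # (B,)
```

Since the abstract describes the prior as plug-and-play and orthogonal to existing approaches, such a score would presumably be combined with an existing detector (e.g. multiplied with or added to an energy- or softmax-based score) rather than used in isolation; the exact combination rule is left to the paper.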
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5900