Abstract: Out-of-distribution (OOD) detection is crucial for deploying machine learning models in the open world. While existing OOD detectors are effective at identifying OOD samples that deviate significantly from in-distribution (ID) data, they often come with trade-offs: deep OOD detectors typically incur high computational costs, require hyperparameter tuning, and offer limited interpretability, whereas traditional OOD detectors may achieve low accuracy on large, high-dimensional datasets. To address these limitations, we propose a novel, effective OOD detection approach that employs an overlap index (OI)-based confidence score function to evaluate the likelihood that a given input belongs to the same distribution as the available ID samples. The proposed OI-based confidence score function is non-parametric, lightweight, and easy to interpret, providing strong flexibility and generality. Extensive empirical evaluations indicate that our OI-based OOD detector is competitive with state-of-the-art OOD detectors in detection accuracy on a wide range of datasets while requiring lower computation and memory costs. Lastly, we show that the proposed OI-based confidence score function inherits desirable properties from OI (e.g., insensitivity to small distributional variations and robustness against Huber $\epsilon$-contamination) and is a versatile tool for estimating OI and model accuracy in specific contexts.
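For intuition, the overlap index between two densities $p$ and $q$ is commonly defined as the overlapping coefficient $\mathrm{OVL}(p, q) = \int \min(p(x), q(x))\,dx$. The sketch below is a minimal, hypothetical 1-D illustration, not the paper's actual construction: it assumes the confidence score is the estimated overlap between the ID sample distribution and a Gaussian reference centered at the test input, and the function names (`overlap_index`, `oi_confidence`), the histogram estimator, and the parameters `sigma`, `bins`, and `n_ref` are all illustrative choices.

```python
import numpy as np

def overlap_index(samples_p, samples_q, bins=50):
    """Histogram estimate of the overlapping coefficient
    OVL(p, q) = integral of min(p(x), q(x)) dx from two 1-D samples."""
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    edges = np.linspace(lo, hi, bins + 1)
    p_hat, _ = np.histogram(samples_p, bins=edges, density=True)
    q_hat, _ = np.histogram(samples_q, bins=edges, density=True)
    # Riemann sum of the pointwise minimum of the two density estimates.
    return np.sum(np.minimum(p_hat, q_hat)) * (edges[1] - edges[0])

def oi_confidence(x, id_samples, sigma=0.25, n_ref=1000, seed=0):
    """Hypothetical OI-based confidence score (an assumption, not the
    paper's estimator): overlap between the ID samples and a Gaussian
    reference centered at the test input x."""
    ref = np.random.default_rng(seed).normal(loc=x, scale=sigma, size=n_ref)
    return overlap_index(id_samples, ref)

# Toy usage: ID data ~ N(0, 1); one ID-like query and one OOD query.
id_samples = np.random.default_rng(0).normal(size=5000)
print(oi_confidence(0.1, id_samples))  # noticeably positive -> ID-like
print(oi_confidence(6.0, id_samples))  # near zero -> flagged as OOD
```

Under these assumptions, higher scores suggest ID-like inputs and scores near zero flag OOD inputs; a real detector would use the paper's actual OI estimator and a calibrated decision threshold.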
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: The changes are highlighted in blue in the updated manuscript.
Itemized changes, also listed in our rebuttal to the reviewers' concerns, include:
**Reviewer CqRc**
1. We have mentioned in Sec. 4.2 that a similar assumption is also used in [\*]. We have also added, in the "**extra information setting**" paragraph, that our approach is comparable to outlier exposure.
2. We have added means and standard deviations (over 10 runs) to the baseline results in Tables 2 & 5 to account for fluctuations in the results.
3. We have clarified in the introduction the terminology describing the method as non-parametric, and noted in footnote 1 that this does not imply the method has no hyperparameters available for tuning.
4. We have added further clarifications in Sec. 4.1 regarding the use of the baseline methods.
**Reviewer QcNM**
1. We have included the comparison with [2] in Sec. 4.2 and the discussion of [1] in Sec. 2.1.
2. We have added the above discussion to the limitations section in Sec. 6.2.
3. We have included the above results on image corruptions in Sec. 4.2.
4. We have discussed the selective classification-OOD detection problem setup in Sec. 6.1.
**Reviewer KgHs**
1. We have highlighted the relevant paragraph, titled "The OI-Based Confidence Score Function," for further clarification.
2. We have further clarified the caption of Figure 1.
3. We have further clarified Corollary 3.4.
Assigned Action Editor: ~Yu-Xiong_Wang1
Submission Number: 4386