Keywords: OOD detection; Safety in Machine Learning
TL;DR: We propose a new OOD detection method that fine-tunes models with sharpness-aware minimization.
Abstract: The out-of-distribution (OOD) detection task is crucial for the real-world deployment of machine learning models. In this paper, we study the problem from the perspective of Sharpness-aware Minimization (SAM). Compared with SGD, SAM better improves model performance and generalization ability, which is closely related to OOD detection~\citep{vaze2021open}. Therefore, instead of using SGD, we fine-tune the model with SAM, and observe that the distributions of in-distribution (ID) data and OOD data are pushed far away from each other. We further provide a theoretical analysis to explain this observation. Besides, with our carefully designed loss, the fine-tuning process is computationally efficient: the OOD performance improvement is usually observed after fine-tuning the model for less than one epoch. Moreover, our method is flexible and can be used to improve the performance of different OOD detection methods.
Extensive experiments demonstrate that our method achieves \emph{state-of-the-art} performance on widely used OOD benchmarks across different CNN architectures. Extensive ablation studies and analyses are provided to support the strong empirical results.
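For intuition, the SAM update the abstract refers to can be sketched as a two-step procedure: first ascend to an approximate worst-case point within a small neighborhood of the current weights, then descend using the gradient taken at that perturbed point. The toy quadratic loss, the `rho` radius, and the learning rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy objective: L(w) = 0.5 * ||w||^2, whose gradient is simply w.
def loss(w):
    return 0.5 * np.sum(w ** 2)

def grad(w):
    return w

def sam_step(w, rho=0.05, lr=0.1):
    """One sharpness-aware minimization step (illustrative values)."""
    g = grad(w)
    # Step 1: move to the approximate worst-case point in an L2 ball
    # of radius rho around w (first-order approximation of the inner max).
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Step 2: descend using the gradient evaluated at the perturbed weights.
    g_adv = grad(w + eps)
    return w - lr * g_adv

w = np.array([1.0, -2.0])
for _ in range(50):
    w = sam_step(w)
```

On this convex toy problem SAM behaves like SGD with a slightly larger effective step; its benefit in the paper's setting comes from biasing fine-tuning toward flatter minima, which (per the abstract) separates ID and OOD score distributions.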
Supplementary Material: pdf
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3501