Out-Of-Distribution Detection With Smooth Training

19 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: out-of-distribution detection
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Detecting out-of-distribution (OOD) inputs is important for ensuring the safe deployment of machine learning models in real-world scenarios. The primary factor impairing OOD detection is neural network overconfidence: a trained network tends to make overly confident predictions on OOD samples. A naive remedy for this overconfidence is label smoothing. However, our experiments show that simply applying label smoothing does not work. We believe this is because label smoothing is applied to the original ID samples, which runs counter to the goal of OOD detection (high confidence on ID samples, low confidence on OOD samples). To this end, we propose a new training strategy, smooth training (SMOT), in which label smoothing is applied to perturbed inputs. During smooth training, input images are masked over label-related regions of random size, and their labels are softened to varying degrees depending on the size of the masked regions. With this training approach, the prediction confidence of the network becomes closely tied to the number of input-image features belonging to a known class, allowing the network to produce highly separable confidence scores for in- and out-of-distribution data. Extensive experiments on diverse OOD detection benchmarks show the effectiveness of SMOT.
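The sketch below illustrates the training idea described in the abstract, not the authors' exact method: it uses random rectangular masks as a stand-in for the paper's label-related regions, and a hypothetical `max_smooth` parameter to scale label softening with the masked fraction. It assumes PyTorch with soft-target cross entropy (available since 1.10).

```python
import torch
import torch.nn.functional as F

def smooth_training_step(model, images, labels, num_classes, max_smooth=0.5):
    """One SMOT-style step (sketch): mask a random region of each image and
    soften its one-hot target in proportion to the masked area."""
    B, _, H, W = images.shape
    masked = images.clone()
    smooth = torch.zeros(B, device=images.device)
    for i in range(B):
        # Random-sized rectangular mask; the paper instead masks label-related regions.
        h = torch.randint(0, H // 2 + 1, (1,)).item()
        w = torch.randint(0, W // 2 + 1, (1,)).item()
        top = torch.randint(0, H - h + 1, (1,)).item()
        left = torch.randint(0, W - w + 1, (1,)).item()
        masked[i, :, top:top + h, left:left + w] = 0.0
        # Softening strength grows with the fraction of pixels removed (assumed schedule).
        smooth[i] = max_smooth * (h * w) / (H * W)
    # Per-sample smoothed targets: (1 - s) on the true class, s spread uniformly.
    targets = torch.zeros(B, num_classes, device=images.device)
    targets.scatter_(1, labels.view(-1, 1), 1.0)
    targets = targets * (1.0 - smooth.view(-1, 1)) + smooth.view(-1, 1) / num_classes
    logits = model(masked)
    return F.cross_entropy(logits, targets)  # cross entropy with soft targets
```

At test time, the usual confidence-based OOD score (e.g., maximum softmax probability) would be applied to the network trained this way; the intended effect is that inputs with few class-relevant features receive correspondingly low confidence.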
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1837