Abstract: Detecting whether examples belong to a given in-distribution or are out-of-distribution (OOD) requires identifying features that are specific to the in-distribution. In the absence of labels, these features can be learned by self-supervised representation learning techniques under the generic assumption that the most abstract features are those which are statistically most over-represented in comparison to other distributions from the same domain. This work shows that self-distillation of the in-distribution training set, combined with contrasting against negative examples derived from shifting transformations of auxiliary data, strongly improves OOD detection. We find that this improvement depends on how the negative samples are generated, with the general observation that negative samples which preserve the statistics of lower-level features but alter the global semantics yield higher detection accuracy on average. We introduce, for the first time, a sensitivity score that allows negative sampling to be optimised systematically in an unsupervised setting. We demonstrate the effectiveness of our approach across a diverse range of OOD detection problems, setting new benchmarks for unsupervised OOD detection in the visual domain.
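To make the described training objective concrete, the following is a minimal sketch of one way the combined self-distillation and contrastive objective could be realised. It is not the authors' implementation: the encoder names, the rotation used as the shifting transformation, and the temperature `tau` are all illustrative assumptions.

```python
# Illustrative sketch only (assumed names throughout): a joint objective that
# self-distils on in-distribution data while contrasting against negatives
# obtained by a shifting transformation of auxiliary data.
import torch
import torch.nn.functional as F

def shift_transform(x: torch.Tensor) -> torch.Tensor:
    """Example shifting transformation: a 90-degree rotation, which keeps
    low-level image statistics but changes global semantics."""
    return torch.rot90(x, k=1, dims=(-2, -1))

def distill_contrast_loss(student, teacher, x_in, x_aux, tau: float = 0.1):
    """InfoNCE-style loss: the positive is the (frozen) teacher embedding of
    the same in-distribution batch; negatives are student embeddings of
    shift-transformed auxiliary data. `student` and `teacher` are assumed to
    map image batches to feature vectors, with `teacher` typically an EMA
    copy of `student`."""
    z_s = F.normalize(student(x_in), dim=-1)             # student features
    with torch.no_grad():
        z_t = F.normalize(teacher(x_in), dim=-1)         # teacher features (no grad)
    z_neg = F.normalize(student(shift_transform(x_aux)), dim=-1)

    pos = (z_s * z_t).sum(dim=-1, keepdim=True)          # (B, 1) positive similarities
    neg = z_s @ z_neg.t()                                # (B, M) negative similarities
    logits = torch.cat([pos, neg], dim=1) / tau
    labels = torch.zeros(z_s.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)               # pull to teacher, push from negatives
```

The rotation here stands in for any transformation that preserves lower-level statistics while altering global semantics, which the abstract identifies as the property associated with the highest detection accuracy.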