Mean-Shifted Contrastive Loss for Anomaly Detection

29 Sept 2021 (modified: 22 Oct 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: anomaly detection
Abstract: Deep anomaly detection methods learn representations that separate normal from anomalous samples. It was previously shown that the most accurate anomaly detectors are obtained when powerful, externally trained feature extractors (e.g., ResNets pre-trained on ImageNet) are fine-tuned on the training data, which consists of normal samples only. Although contrastive learning is currently the state of the art in self-supervised anomaly detection, we show that it achieves poor results when used to fine-tune pre-trained feature extractors. We investigate the reason for this collapse and find that pre-trained feature initialization causes poor conditioning for standard contrastive objectives, resulting in poor optimization dynamics. Based on our analysis, we propose a modified contrastive objective, the \textit{Mean-Shifted Contrastive Loss}. Our method is highly effective and achieves new state-of-the-art anomaly detection performance on multiple benchmarks, including $97.2\%$ ROC-AUC on the CIFAR-10 dataset.
One-sentence Summary: A novel feature adaptation approach for deep anomaly detection.
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2106.03844/code)
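
The abstract does not spell out the objective, so the following is only a minimal, hypothetical PyTorch sketch of what a mean-shifted contrastive loss of this kind could look like: features from a pre-trained extractor are re-centered around the mean of the normal training features, and a standard NT-Xent-style contrastive loss is applied to the shifted, normalized representations. The function name, the temperature value, and the exact formulation are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def mean_shifted_contrastive_loss(z1, z2, center, temperature=0.25):
    """Illustrative sketch (not the paper's exact formulation).

    z1, z2 : (B, D) features of two augmented views from a pre-trained extractor
    center : (D,)  mean of the normalized features of the normal training set
    """
    # Shift both views by the center and re-normalize, so similarities are
    # measured around the mean of the normal features rather than the origin.
    z1 = F.normalize(F.normalize(z1, dim=1) - center, dim=1)
    z2 = F.normalize(F.normalize(z2, dim=1) - center, dim=1)

    features = torch.cat([z1, z2], dim=0)          # (2B, D)
    sim = features @ features.t() / temperature    # scaled cosine similarities
    b = z1.size(0)

    # Positive pairs: row i matches row i + B (the other augmentation of the same image).
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)]).to(sim.device)

    # Mask out self-similarities so a sample is never its own negative.
    mask = torch.eye(2 * b, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float('-inf'))

    return F.cross_entropy(sim, targets)
```

In this sketch, the center would be computed once from the (normalized) features of the normal training data, and anomaly scores at test time could be derived, for example, from cosine distance or kNN to the training features; both of these details are assumptions made for illustration rather than claims about the paper's method.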