Abstract: Anomaly detection focuses on identifying samples that deviate from the norm. Discovering informative representations of normal samples is crucial to detecting anomalies effectively. Recent self-supervised methods have successfully learned such representations by employing prior knowledge about anomalies to create synthetic outliers during training. However, we often do not know what to expect from unseen data in specialized real-world applications. In this work, we address this limitation with our new approach Con$_2$, which leverages symmetries in normal samples to observe the data in different contexts. Con$_2$ clusters representations according to their context and simultaneously aligns their positions to learn an informative representation space that is structured according to the properties of normal data. Anomalies do not adhere to the same structure as normal data, making their representations deviate from the learned context clusters. We demonstrate the benefit of this approach in extensive experiments on specialized medical datasets, outperforming competitive baselines based on self-supervised learning and pretrained models.
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Shinichi_Nakajima2
Submission Number: 4911