Two Is Better Than One: Aligned Representation Pairs for Anomaly Detection

TMLR Paper4911 Authors

22 May 2025 (modified: 27 Aug 2025) · Decision pending for TMLR · CC BY 4.0
Abstract: Anomaly detection focuses on identifying samples that deviate from the norm. Discovering informative representations of normal samples is crucial to detecting anomalies effectively. Recent self-supervised methods have successfully learned such representations by employing prior knowledge about anomalies to create synthetic outliers during training. However, in specialized real-world applications we often do not know what to expect from unseen data. In this work, we address this limitation with our new approach, Con2, which leverages prior knowledge about symmetries in normal samples to observe the data in different contexts. Con2 consists of two parts: Context Contrasting clusters representations according to their context, while Content Alignment encourages the model to capture semantic information by aligning the positions of normal samples across clusters. The resulting representation space allows us to detect anomalies as outliers of the learned context clusters. We demonstrate the benefit of this approach in extensive experiments on specialized medical datasets, outperforming strong baselines based on self-supervised learning and pretrained models, while remaining competitive on natural imaging benchmarks.
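To make the two components concrete, the following is a minimal, hypothetical sketch of how a Con2-style objective could be assembled from the abstract's description alone. It assumes a PyTorch setup; all names (context_logits, embeddings, num_contexts, lambda_align) and the specific loss forms are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def context_contrasting_loss(context_logits: torch.Tensor,
                             context_labels: torch.Tensor) -> torch.Tensor:
    # Encourage representations to cluster by context (e.g. the symmetry
    # transformation applied to a normal sample), here via classification.
    return F.cross_entropy(context_logits, context_labels)

def content_alignment_loss(embeddings: torch.Tensor,
                           num_contexts: int) -> torch.Tensor:
    # Align the position of each normal sample across its context clusters so
    # that semantic (content) information is shared between clusters.
    # `embeddings` is assumed to have shape (num_contexts, batch, dim):
    # the same batch of samples viewed in every context.
    z = F.normalize(embeddings, dim=-1)
    anchor = z[0]  # positions in a reference context
    loss = z.new_zeros(())
    for k in range(1, num_contexts):
        # pull each sample's representation in context k toward its
        # position in the reference context (cosine alignment)
        loss = loss + (1.0 - (anchor * z[k]).sum(dim=-1)).mean()
    return loss / max(num_contexts - 1, 1)

def con2_style_loss(context_logits, context_labels, embeddings,
                    num_contexts, lambda_align: float = 1.0):
    # Combined objective; lambda_align is an assumed weighting hyperparameter.
    return (context_contrasting_loss(context_logits, context_labels)
            + lambda_align * content_alignment_loss(embeddings, num_contexts))
```

At test time, anomalies would then be scored by how far a sample's representation falls from the learned context clusters (e.g. distance to the nearest cluster center), consistent with the abstract's description of detecting anomalies as outliers of those clusters.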
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Shinichi_Nakajima2
Submission Number: 4911