Unsupervised Anomaly Detection in Tabular Data with Test-time Contrastive Learning

ICLR 2026 Conference Submission 17077 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Unsupervised Anomaly Detection, Data shift, Test-Time Training, Contrastive Learning
Abstract: Unsupervised anomaly detection methods typically learn the feature patterns of normal samples during training and then, at test time, flag samples that deviate from the learned patterns as anomalies. However, most existing methods assume that the normal patterns in the test set resemble those in the training set, ignoring the fact that a limited number of training samples may not cover all possible normal patterns. As a result, when the normal patterns in the test set differ from those in the training set, the model may struggle to decide whether these samples are normal or anomalous, leading to incorrect predictions. To address this issue, we propose a novel Test-time Contrastive learning approach for unsupervised Anomaly Detection in tabular data (TCAD). TCAD consists of two core stages: Collaborative Dual-task Training and Test-Time Contrastive Learning. During training, Collaborative Dual-task Training uses two self-supervised tasks to capture multi-level features of normal samples and model normal patterns. At test time, Test-Time Contrastive Learning assigns pseudo labels to high-confidence samples and updates the model in two ways: first, it facilitates model adaptation to pseudo-normal samples while preventing overfitting to pseudo-abnormal ones; second, it employs a KNN-based contrastive strategy to align pseudo-normal samples with the training distribution while pushing pseudo-abnormal samples away. By combining robust normal-pattern modeling with iterative test-time adaptation, TCAD improves anomaly discrimination, especially under distribution shifts between training and test sets. We construct distribution shifts on 15 widely used tabular datasets, and the results show that TCAD achieves state-of-the-art performance, outperforming the best baseline by 4.19% in AUC-ROC, 3.15% in AUC-PR, and 6.64% in F1 score.
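To make the test-time stage concrete, the following is a minimal sketch (not the authors' implementation) of the two ingredients named in the abstract: quantile-based pseudo-labeling of high-confidence test samples, and a KNN-based contrastive objective that pulls pseudo-normal embeddings toward the training distribution and pushes pseudo-abnormal ones at least a margin away. All function names, thresholds (`lo_q`, `hi_q`), and the margin value are illustrative assumptions.

```python
import numpy as np

def pseudo_label(scores, lo_q=0.2, hi_q=0.8):
    """Assign pseudo labels to high-confidence test samples only.

    Samples whose anomaly score falls at or below the lo_q quantile are
    treated as pseudo-normal (0); at or above the hi_q quantile as
    pseudo-abnormal (1); everything in between stays unlabeled (-1).
    The quantile thresholds are illustrative, not from the paper.
    """
    lo, hi = np.quantile(scores, [lo_q, hi_q])
    labels = np.full(len(scores), -1, dtype=int)
    labels[scores <= lo] = 0
    labels[scores >= hi] = 1
    return labels

def knn_contrastive_loss(test_emb, train_emb, labels, k=3, margin=1.0):
    """KNN-based contrastive objective (sketch).

    For each pseudo-labeled test embedding, compute the mean distance to
    its k nearest training embeddings: pseudo-normal samples are pulled
    toward the training distribution (loss = kNN distance), while
    pseudo-abnormal samples are pushed at least `margin` away
    (hinge loss). Unlabeled samples are skipped.
    """
    losses = []
    for x, y in zip(test_emb, labels):
        if y == -1:
            continue  # low-confidence sample: no update signal
        dists = np.linalg.norm(train_emb - x, axis=1)
        knn_dist = np.sort(dists)[:k].mean()
        losses.append(knn_dist if y == 0 else max(0.0, margin - knn_dist))
    return float(np.mean(losses)) if losses else 0.0
```

In a full pipeline this loss would be differentiated through the encoder that produces the embeddings and minimized iteratively over test batches; the numpy version above only illustrates the selection and alignment logic.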
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 17077