ADPretrain: Advancing Industrial Anomaly Detection via Anomaly Representation Pretraining

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Anomaly Detection, Anomaly Representation Pretraining
TL;DR: We propose a novel AD pretraining framework specifically designed to learn robust and discriminative pretrained representations for industrial anomaly detection.
Abstract: Current mainstream and state-of-the-art anomaly detection (AD) methods are largely built on feature networks obtained via ImageNet pretraining. However, whether supervised or self-supervised, ImageNet pretraining does not match the goal of anomaly detection: pretraining on natural images does not aim to distinguish normal from abnormal. Moreover, natural images and industrial images in AD scenarios typically exhibit a distribution shift. These two issues can make ImageNet-pretrained features suboptimal for AD tasks. To further advance the AD field, pretrained representations designed specifically for AD are highly desirable and valuable. To this end, we propose a novel AD representation learning framework specifically designed to learn robust and discriminative pretrained representations for industrial anomaly detection. Specifically, closely following the goal of anomaly detection (i.e., focusing on discrepancies between normal and abnormal samples), we propose angle- and norm-oriented contrastive losses that simultaneously maximize the angle and the norm difference between normal and abnormal features. To avoid the distribution shift from natural images to AD images, our pretraining is performed on a large-scale AD dataset, RealIAD. To further alleviate the potential shift between the pretraining data and downstream AD datasets, we learn the pretrained AD representations on top of a class-generalizable representation, residual features. For evaluation, we take five embedding-based AD methods and simply replace their original features with our pretrained representations. Extensive experiments on five AD datasets and five backbones consistently show the superiority of our pretrained features. The code is available at https://github.com/xcyao00/ADPretrain.
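The abstract does not spell out the loss formulation; for intuition only, below is a minimal PyTorch sketch of what angle- and norm-oriented contrastive terms and residual features could look like. Everything here is an assumption, not the paper's actual method: the function names, the `margin` hinge, the sign convention (abnormal features pushed toward larger norms than normal ones), and the nearest-neighbor reading of "residual features" are all illustrative choices; consult the released code for the real implementation.

```python
import torch
import torch.nn.functional as F


def angle_norm_contrastive_loss(normal_feats: torch.Tensor,
                                abnormal_feats: torch.Tensor,
                                margin: float = 0.5) -> torch.Tensor:
    """Hypothetical sketch of angle- and norm-oriented contrastive terms.

    normal_feats: (N, D) features of normal samples.
    abnormal_feats: (M, D) features of abnormal samples.
    """
    # Angle-oriented term: minimizing the pairwise cosine similarity between
    # normal and abnormal features widens the angle between the two sets.
    cos_sim = F.normalize(normal_feats, dim=1) @ F.normalize(abnormal_feats, dim=1).T
    angle_loss = cos_sim.mean()

    # Norm-oriented term (assumed direction): a hinge that encourages the mean
    # abnormal norm to exceed the mean normal norm by at least `margin`.
    norm_gap = abnormal_feats.norm(dim=1).mean() - normal_feats.norm(dim=1).mean()
    norm_loss = F.relu(margin - norm_gap)

    return angle_loss + norm_loss


def residual_features(feats: torch.Tensor, normal_bank: torch.Tensor) -> torch.Tensor:
    """One common reading of "residual features" in AD: subtract each
    feature's nearest normal reference from a bank of normal features, so the
    representation encodes deviation from normality rather than class content.
    """
    dists = torch.cdist(feats, normal_bank)      # (N, K) pairwise distances
    nearest = normal_bank[dists.argmin(dim=1)]   # nearest normal reference per feature
    return feats - nearest


# Example usage with random stand-ins for backbone features:
normal = torch.randn(32, 256)
abnormal = torch.randn(16, 256)
bank = torch.randn(128, 256)
loss = angle_norm_contrastive_loss(residual_features(normal, bank),
                                   residual_features(abnormal, bank))
```

The two terms are complementary: the angle term separates normal and abnormal features directionally, while the norm term separates them in magnitude, so an anomaly score can exploit either cue.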
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 16706