Robustness of unsupervised methods for image surface-anomaly detection

Published: 01 Jan 2025, Last Modified: 15 May 2025 · Pattern Anal. Appl. 2025 · CC BY-SA 4.0
Abstract: Surface-anomaly detection is a critical task in ensuring product quality, as defects can pose safety risks and diminish product lifespan. A significant challenge in this domain is the limited availability of anomalous samples, which makes training supervised models impractical. In response, unsupervised deep-learning-based methods have attracted significant attention in recent years, as they do not require anomalous samples for training. Such methods assume that all anomalous samples can be identified during dataset curation and subsequently removed from the training set. In practice, however, identifying all anomalous samples without any false negatives is rarely possible, either due to human error or due to ambiguity in what is and is not considered a defect. In this paper, we address the need to measure the robustness of unsupervised surface-anomaly detection methods as one of the most important performance metrics. To this end, we propose a robustness measure that describes the sensitivity of an unsupervised method to the presence of anomalous data in the training set. We extensively evaluate seven well-established unsupervised methods that follow different anomaly-detection paradigms on four diverse datasets and analyze the results. We show that most of the analyzed methods are fairly robust to low percentages of anomalous samples in the training set, with some retaining near-baseline performance even when that percentage grows fairly large.
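The abstract does not define the proposed robustness measure, so the following is only a minimal sketch of the general experimental idea it describes: contaminate the nominal training set with a growing fraction of anomalous samples and track how far detection performance drops from the clean baseline. The kNN-distance scorer, the synthetic 2-D features, and the final robustness summary are all illustrative placeholders, not the paper's actual method or metric.

```python
# Sketch only: contamination-based robustness evaluation with placeholder data/model.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_data(n_normal, n_anomalous):
    """Synthetic stand-in for image features: normals cluster near the origin,
    anomalies are shifted away from it."""
    normal = rng.normal(0.0, 1.0, size=(n_normal, 2))
    anomalous = rng.normal(4.0, 1.0, size=(n_anomalous, 2))
    return normal, anomalous

def knn_scores(train_feats, test_feats, k=5):
    """Anomaly score = mean distance to the k nearest training samples."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    dists, _ = nn.kneighbors(test_feats)
    return dists.mean(axis=1)

# Fixed test set: half normal (label 0), half anomalous (label 1).
test_normal, test_anom = make_data(500, 500)
test_x = np.vstack([test_normal, test_anom])
test_y = np.concatenate([np.zeros(500), np.ones(500)])

train_normal, train_anom_pool = make_data(1000, 200)

# Train with increasing contamination fractions and record AUROC each time.
auc = {}
for frac in [0.0, 0.01, 0.05, 0.10]:
    n_inject = int(frac * len(train_normal))
    train_x = np.vstack([train_normal, train_anom_pool[:n_inject]])
    auc[frac] = roc_auc_score(test_y, knn_scores(train_x, test_x))

baseline = auc[0.0]
# One possible robustness summary (an assumption, not the paper's measure):
# mean AUROC retained relative to the clean baseline across contamination
# levels; 1.0 means fully robust, lower values mean higher sensitivity.
robustness = np.mean([auc[f] / baseline for f in auc if f > 0.0])
print({f: round(a, 3) for f, a in auc.items()}, "robustness:", round(robustness, 3))
```

In an actual study, the placeholder scorer would be replaced by the unsupervised detector under test and the synthetic features by image data, with the same contaminate-train-evaluate loop repeated per method and dataset.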