Dual Memory Networks Guided Reverse Distillation for Unsupervised Anomaly Detection

Published: 01 Jan 2024, Last Modified: 15 Jan 2025 · ACCV (6) 2024 · CC BY-SA 4.0
Abstract: Visual anomaly detection, which is essential for industrial applications, is typically framed as a one-class classification task. Recent techniques employing the teacher-student framework for this task have proven effective in both accuracy and processing time. However, they often assume that real-world anomalies are rare, emphasizing anomaly-free data while neglecting the value of abnormal data. We contend that such a paradigm is suboptimal because it fails to adequately differentiate between normal and abnormal cases. To overcome this issue, we propose a novel Dual Memory Guided Reverse Distillation (DM-GRD) framework that learns feature representations for both normal and abnormal data. Specifically, to obtain anomalous patterns, original images are first augmented with a simple Fourier transformation followed by Perlin noise. A teacher network then receives arbitrary images and extracts high-level features. To combat the "forgetting" and "over-generalization" problems in the student network, two memory banks are introduced to separately store normal and abnormal features while maximizing the distance margins between them. Next, a multi-scale feature fusion module is trained to integrate valuable information from the memory banks. Finally, the student network consumes these fused features and is trained to match the teacher network on the same images. Experiments on three industrial benchmark datasets show that DM-GRD outperforms current state-of-the-art memory-bank and knowledge-distillation alternatives, demonstrating the robust generalization capability of the proposed framework. The code is publicly available at https://github.com/SKKUAutoLab/DM-GRD.
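The abstract describes synthesizing pseudo-anomalies via a Fourier-domain perturbation combined with a Perlin-noise mask. The sketch below illustrates one plausible reading of that pipeline; all function names, parameters, and the simplified Perlin-style mask are illustrative assumptions, not the authors' released implementation (see the linked repository for the real code).

```python
# Illustrative pseudo-anomaly synthesis: Fourier perturbation + Perlin-style mask.
# Assumption: this is a sketch of the idea, not the authors' actual method.
import numpy as np

def fourier_perturb(img, alpha=0.3, seed=0):
    """Randomly rescale part of the low-frequency amplitude spectrum."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftshift(np.fft.fft2(img))
    amp, phase = np.abs(f), np.angle(f)
    h, w = img.shape
    ch, cw = h // 2, w // 2
    r = int(min(h, w) * alpha / 2)  # half-width of the perturbed low-freq band
    amp[ch - r:ch + r, cw - r:cw + r] *= rng.uniform(0.5, 1.5, (2 * r, 2 * r))
    f_new = amp * np.exp(1j * phase)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f_new)))

def perlin_like_mask(shape, res=4, thresh=0.6, seed=0):
    """Cheap Perlin-style mask: coarse noise, bilinearly upsampled, thresholded."""
    rng = np.random.default_rng(seed)
    coarse = rng.random((res, res))
    ys = np.linspace(0, res - 1, shape[0])
    xs = np.linspace(0, res - 1, shape[1])
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, res - 1), np.minimum(x0 + 1, res - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    smooth = (coarse[np.ix_(y0, x0)] * (1 - wy) * (1 - wx)
              + coarse[np.ix_(y1, x0)] * wy * (1 - wx)
              + coarse[np.ix_(y0, x1)] * (1 - wy) * wx
              + coarse[np.ix_(y1, x1)] * wy * wx)
    return (smooth > thresh).astype(float)

def synthesize_anomaly(img, beta=0.7, seed=0):
    """Blend the Fourier-perturbed texture into the image where the mask is on."""
    mask = perlin_like_mask(img.shape, seed=seed)
    anomaly_tex = fourier_perturb(img, seed=seed)
    aug = img * (1 - mask) + (beta * anomaly_tex + (1 - beta) * img) * mask
    return aug, mask

img = np.random.default_rng(1).random((64, 64))  # stand-in grayscale image
aug, mask = synthesize_anomaly(img)
```

The binary mask doubles as a pixel-level pseudo-ground-truth for the synthetic anomaly region, which is what lets the framework supervise abnormal-feature storage without real defect labels.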