Abstract: Anomaly detection in video is a challenging task of great practical value. Most existing approaches formulate anomaly detection as a reconstruction/prediction problem built on an encoder-decoder structure. However, they generalize poorly when the model is directly applied to an unseen scene. To address this problem, we propose an Adaptive Anomaly Detection Network (AADNet) for few-shot scene-adaptive anomaly detection. Our core idea is to learn an adaptive model that can identify abnormal events without fine-tuning when transferred to a new scene. To this end, AADNet employs a Segments Similarity Measurement (SSM) module that computes the cosine distance between different input video segments, based on which normal segments are grouped together. Meanwhile, to further exploit the information in normal events, we design a novel Relational Scene Awareness (RSA) module to capture the pixel-to-pixel relationships between different segments. By combining the SSM module with RSA, the proposed AADNet generalizes much better. Extensive experiments on four datasets demonstrate that our method adapts to a new scene effectively without fine-tuning and achieves state-of-the-art performance.
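To make the SSM idea concrete, below is a minimal sketch of pairwise cosine similarity between pooled segment features, with a simple rule that keeps the most mutually similar segments as candidate normal ones. The abstract does not give implementation details, so the feature shape, the function name, and the aggregation/selection rule here are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F


def segment_cosine_similarity(segment_feats: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarity between per-segment feature vectors.

    segment_feats: (N, D) tensor, one pooled feature vector per video segment
    (shape is an assumption for illustration).
    Returns an (N, N) matrix; larger values mean more similar segments.
    """
    normed = F.normalize(segment_feats, dim=-1)  # unit-normalize each segment feature
    return normed @ normed.t()                   # cosine similarity matrix


# Hypothetical usage: treat the segments most similar to all others as "normal",
# assuming anomalous segments are a minority that deviates from the cluster.
feats = torch.randn(8, 256)                            # 8 segments, 256-D dummy features
sim = segment_cosine_similarity(feats)
mean_sim = (sim.sum(dim=1) - 1.0) / (sim.size(0) - 1)  # mean similarity to the other segments
normal_idx = mean_sim.topk(k=5).indices                # keep the 5 most "typical" segments
```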