Abstract: Video Anomaly Detection (VAD) is an important task that takes a video clip as input and outputs class labels, e.g., normal or abnormal, at the frame level. Wang et al. proposed a method called DSTJiP, which trains the model by solving Decoupled Spatial and Temporal Jigsaw Puzzles and achieves impressive VAD performance. However, the model sometimes fails to detect abnormal human actions in which abnormal motions are accompanied by normal motions. The reason is that the model learns representations from little- and non-motion parts of the training examples and thus becomes insensitive to abnormal motions. To circumvent this problem, we propose solving Spatial and Augmented Temporal Jigsaw Puzzles (SATJiP), an extension of DSTJiP. SATJiP encourages the model to focus on motions via a novel pretext task, enabling it to detect abnormal motions even when they are accompanied by normal motions. Experiments conducted on three standard VAD benchmarks demonstrate that SATJiP outperforms state-of-the-art methods.
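To make the temporal jigsaw idea concrete, the following is a minimal illustrative sketch (not the authors' code) of how a temporal jigsaw pretext example can be constructed: the frames of a clip are shuffled and the permutation itself serves as the training target. The function name, clip shape, and clip length below are assumptions for illustration only.

```python
# Illustrative sketch (assumed names/shapes, not the authors' implementation):
# build one temporal-jigsaw pretext example by shuffling the frames of a clip.
import numpy as np

def make_temporal_jigsaw(clip: np.ndarray, rng: np.random.Generator):
    """Shuffle the frames of a clip and return the shuffled clip together
    with the permutation, which serves as the pretext-task target.

    clip: array of shape (T, H, W, C) -- T consecutive frames.
    Returns (shuffled_clip, permutation), where permutation[i] is the
    original temporal index of the frame now at position i.
    """
    T = clip.shape[0]
    permutation = rng.permutation(T)    # pretext label: the frame order
    shuffled_clip = clip[permutation]   # frames presented out of order
    return shuffled_clip, permutation

# Usage: a model trained on such examples predicts, for each position,
# the original temporal index of the frame placed there.
rng = np.random.default_rng(0)
dummy_clip = rng.random((7, 64, 64, 3)).astype(np.float32)  # 7-frame toy clip
x, y = make_temporal_jigsaw(dummy_clip, rng)
print(x.shape, y)  # (7, 64, 64, 3) and a permutation of 0..6
```

A clip containing anomalous motion is harder to reorder correctly at test time, so low confidence on the pretext prediction can be read as an anomaly signal; the proposed augmented temporal puzzles build on this idea to emphasize the moving parts of the clip.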