Abstract: Background subtraction underpins many safety applications in airport traffic management, such as visual conflict warning systems. However, deep learning methods often misclassify stationary aircraft as foreground, mainly because they prioritize learning appearance over motion features: a stationary aircraft that looks similar to a moving one is therefore labeled foreground. To address this issue, a Motion-enhanced Background Subtraction Network (MBSNet) is proposed in this paper. MBSNet is designed to focus more on motion information within an encoder-decoder framework. First, a Motion Augmentation Encoder Module (MAEM) is introduced, which generates a foreground-free background frame from previous frames; because targets on the airport ground are relatively sparse, this background estimate remains clean. MAEM compares the background frame with the current frame containing moving objects, indirectly enhancing the motion component in the encoded features. Second, a Motion Accumulation Decoder Module (MADM) is designed, which accumulates motion-augmented features from the current frame and past frames based on a feature dissimilarity measure. Since aircraft exhibit consistent motion patterns, such as continuous straight travel with occasional turns, MADM further strengthens the motion component in the accumulated features. Finally, MBSNet is evaluated on the AGVS dataset, and the experiments demonstrate the effectiveness of the proposed method for airport background subtraction.
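To make the two ideas in the abstract concrete, the sketch below illustrates (in PyTorch) one possible reading of background-based motion enhancement and dissimilarity-weighted accumulation. The abstract does not specify implementation details, so the temporal-median background estimate, the cosine dissimilarity measure, and all layer shapes and names here are assumptions for illustration only, not the authors' MBSNet code.

```python
# Illustrative sketch only: the background estimate (temporal median), the
# dissimilarity measure (cosine distance), and all shapes below are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MotionAugmentedEncoder(nn.Module):
    """Toy stand-in for MAEM: contrast the current frame with an estimated
    clean background so the encoded features emphasize motion."""

    def __init__(self, channels=32):
        super().__init__()
        self.encode = nn.Conv2d(6, channels, kernel_size=3, padding=1)

    def forward(self, frames):            # frames: (T, 3, H, W), oldest first
        # Sparse airport scenes -> a per-pixel temporal median of past frames
        # is assumed to approximate a foreground-free background.
        background = frames[:-1].median(dim=0).values
        current = frames[-1]
        # Concatenate current frame and background so the encoder can compare
        # them, indirectly amplifying the motion component.
        x = torch.cat([current, background], dim=0).unsqueeze(0)
        return F.relu(self.encode(x))     # (1, C, H, W)


def accumulate_motion(feat_curr, feat_past):
    """Toy stand-in for MADM: weight past features by their dissimilarity
    to the current features before accumulating them."""
    # Per-location cosine dissimilarity (assumed measure).
    dissim = 1.0 - F.cosine_similarity(feat_curr, feat_past, dim=1, eps=1e-6)
    return feat_curr + dissim.unsqueeze(1) * feat_past


if __name__ == "__main__":
    frames = torch.rand(8, 3, 64, 64)           # 7 past frames + current frame
    encoder = MotionAugmentedEncoder()
    f_curr = encoder(frames)                    # features at the current step
    f_past = encoder(torch.roll(frames, 1, 0))  # features at the previous step
    fused = accumulate_motion(f_curr, f_past)
    print(fused.shape)                          # torch.Size([1, 32, 64, 64])
```

The weighting choice mirrors the abstract's intuition: regions where current and past features disagree are more likely to contain motion, so they contribute more to the accumulated features, but any resemblance to the actual MADM design is a guess.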