BSUV-Net: A Fully-Convolutional Neural Network for Background Subtraction of Unseen Videos
Abstract: Background subtraction (BGS) is a fundamental video processing task which is a key
component of many applications. Deep learning-based supervised algorithms achieve very good performance
in BGS; however, most of these algorithms are optimized for either a specific video or a group of videos, and
their performance decreases dramatically when applied to unseen videos. Recently, several papers addressed
this problem and proposed video-agnostic supervised BGS algorithms. However, nearly all of the data
augmentations used in these algorithms are limited to the spatial domain and do not account for temporal
variations that naturally occur in video data. In this work, we introduce spatio-temporal data augmentations
and apply them to one of the leading video-agnostic BGS algorithms, BSUV-Net. We also introduce a new
cross-validation training and evaluation strategy for the CDNet-2014 dataset that makes it possible to fairly
and easily compare the performance of various video-agnostic supervised BGS algorithms. Our new model
trained using the proposed data augmentations, named BSUV-Net 2.0, significantly outperforms state-of-the-art algorithms evaluated on unseen videos of CDNet-2014. We also evaluate the cross-dataset generalization capacity of BSUV-Net 2.0 by training it solely on CDNet-2014 videos and evaluating its performance on the LASIESTA dataset. Overall, BSUV-Net 2.0 provides a ∼5% improvement in F-score over state-of-the-art methods on unseen videos of the CDNet-2014 and LASIESTA datasets. Furthermore, we develop a real-time variant of our model, which we call Fast BSUV-Net 2.0, whose performance is close to the state of the art.
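The spatio-temporal augmentations described above can be illustrated with a minimal sketch: sample a frame pair with a random temporal gap, then apply an identical random spatial crop to both frames so they stay aligned. The function name, parameters, and sampling scheme here are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def spatio_temporal_augment(frames, crop_size=(224, 224), max_gap=10, rng=None):
    """Illustrative spatio-temporal augmentation (not the paper's code).

    Samples an (earlier-reference, current) frame pair separated by a random
    temporal gap, then applies the same random spatial crop to both frames.
    `frames`: sequence of H x W x C video frames as NumPy arrays.
    """
    rng = rng or np.random.default_rng()
    n = len(frames)
    gap = int(rng.integers(1, max_gap + 1))     # random temporal spacing
    cur_idx = int(rng.integers(gap, n))         # index of the current frame
    ref_idx = cur_idx - gap                     # earlier "reference" frame

    h, w = frames[0].shape[:2]
    ch, cw = crop_size
    top = int(rng.integers(0, h - ch + 1))      # shared crop offsets keep the
    left = int(rng.integers(0, w - cw + 1))     # pair spatially aligned

    def crop(f):
        return f[top:top + ch, left:left + cw]

    return crop(frames[ref_idx]), crop(frames[cur_idx])
```

Varying the temporal gap exposes the network to different amounts of scene change between reference and current frames, which spatial-only augmentations cannot simulate.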