Abstract: Background subtraction in videos is a highly challenging task by definition, as it operates at the level of pixel-wise classification. Therefore, great attention to detail is essential. In this paper, we follow the success of Deep Learning in Computer Vision and present an end-to-end system for background subtraction in videos. Our model tracks temporal changes in a video sequence by applying 3D convolutions to the most recent frames of the video; thus, no background model needs to be maintained and updated. In addition, it can handle multiple scenes without further fine-tuning on each scene individually. We evaluate our system on the largest dataset for change detection, CDnet, with over 50 videos spanning 11 categories. Further evaluation is performed on the ESI dataset, which features extreme and sudden illumination changes. Our model surpasses the state of the art on both datasets according to the average ranking of the models over a wide range of metrics.
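To make the core idea concrete, the sketch below shows one plausible way a 3D-convolutional network could map a short stack of recent frames to a per-pixel foreground probability map. This is a minimal PyTorch illustration, not the authors' architecture; the layer widths, kernel sizes, and the choice of 10 input frames are assumptions made for the example.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the paper's exact architecture): a small 3D-convolutional
# network that maps a stack of recent RGB frames to a per-pixel foreground
# probability map. Channel counts and the number of input frames are assumed.
class BackgroundSubtraction3D(nn.Module):
    def __init__(self, num_frames: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # 3D convolutions mix information across time and space,
            # so no explicit background model has to be kept.
            nn.Conv3d(3, 16, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
            nn.ReLU(inplace=True),
            # Collapse the temporal dimension to produce a single mask.
            nn.Conv3d(32, 1, kernel_size=(num_frames, 1, 1)),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels=3, time=num_frames, height, width)
        logits = self.features(clip).squeeze(2)   # -> (batch, 1, H, W)
        return torch.sigmoid(logits)              # per-pixel foreground probability


if __name__ == "__main__":
    model = BackgroundSubtraction3D(num_frames=10)
    frames = torch.randn(1, 3, 10, 240, 320)      # the 10 most recent frames
    mask = model(frames)
    print(mask.shape)                             # torch.Size([1, 1, 240, 320])
```

Because the network consumes only a sliding window of recent frames, it can in principle be applied to new scenes without maintaining or updating a scene-specific background model.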