BSCGAN: Deep Background Subtraction with Conditional Generative Adversarial Networks

Published: 01 Jan 2018, Last Modified: 06 Mar 2025 · ICIP 2018 · CC BY-SA 4.0
Abstract: This paper proposes a deep background subtraction method based on a conditional Generative Adversarial Network (cGAN). The proposed model consists of two networks trained jointly: a generator and a discriminator. The generator learns the mapping from the observed inputs (i.e., an image and its background) to the output (i.e., a foreground mask). The discriminator then learns a loss function to train this mapping by distinguishing the real foreground (i.e., the ground truth) from the fake foreground (i.e., the predicted output), conditioned on the same input image and background. Evaluation on two public datasets, CDnet 2014 and BMC, shows that the proposed model outperforms state-of-the-art methods.
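To make the adversarial training described above concrete, the following is a minimal NumPy sketch of a conditional-GAN-style objective for mask prediction. It assumes a pix2pix-style loss (adversarial BCE plus an L1 term toward the ground-truth mask); the exact loss weights and network outputs here are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy over probabilities."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

rng = np.random.default_rng(0)

# Hypothetical batch of 4 ground-truth masks and generator predictions (1x32x32 each).
real_mask = (rng.random((4, 1, 32, 32)) > 0.5).astype(np.float32)
fake_mask = rng.random((4, 1, 32, 32)).astype(np.float32)

# Stand-in discriminator probabilities, conditioned on (image, background):
# near 1 for real masks, near 0 for generated masks.
d_real = 0.8 + 0.2 * rng.random(4)
d_fake = 0.2 * rng.random(4)

# Discriminator objective: classify real pairs as 1 and fake pairs as 0.
d_loss = bce(d_real, np.ones(4)) + bce(d_fake, np.zeros(4))

# Generator objective: fool the discriminator, plus an L1 term
# pulling the predicted mask toward the ground truth (pix2pix-style weight).
lambda_l1 = 100.0  # assumed weight, for illustration only
g_loss = bce(d_fake, np.ones(4)) + lambda_l1 * float(np.mean(np.abs(fake_mask - real_mask)))
```

In a real training loop, `d_real` and `d_fake` would come from the discriminator network evaluated on the concatenated (image, background, mask) input, and the two losses would be minimized alternately by gradient descent.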