Abstract: In this paper, we show that using Generative Adversarial Networks (GANs) to transform nighttime images into daytime representations increases the robustness of pedestrian detection in low-light conditions. Our approach first learns an image-to-image translation that transfers the style of daytime images onto nighttime images via unpaired GAN training. Second, we use the end-to-end trained GAN model to translate night images as a pre-processing step before feeding them into an object detector that is pre-trained on daytime images only. To demonstrate the effectiveness of our translation approach, we conducted experiments on two real-world pedestrian datasets using both one-stage and two-stage object detectors. Our results outperform the baseline in all experiments and show highly competitive detection performance compared with other GAN-based approaches, while using the most lightweight architecture. We believe our approach is an effective pre-processing step that helps bridge the performance gap between day and night without the expense of re-training object detectors on additional night images.
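The two-stage pipeline described above, translation as pre-processing followed by an unchanged day-trained detector, can be sketched as follows. All names and both stages are illustrative stand-ins (a brightness boost and a threshold "detector"), not the paper's actual GAN generator or detection networks; the sketch only shows the data flow.

```python
def gan_night_to_day(image):
    """Stand-in for the trained GAN generator: maps a night image to a
    daytime-style representation (here, a crude brightness boost)."""
    return [[min(255, px + 80) for px in row] for row in image]

def day_detector(image):
    """Stand-in for a detector pre-trained on daytime images only:
    'detects' pixels brighter than a fixed threshold."""
    return [(r, c) for r, row in enumerate(image)
            for c, px in enumerate(row) if px > 128]

def detect_at_night(night_image):
    # Key idea: translation runs purely as a pre-processing step,
    # so the detector itself is never re-trained on night data.
    return day_detector(gan_night_to_day(night_image))

night = [[60, 200], [30, 90]]
print(day_detector(night))      # [(0, 1)]
print(detect_at_night(night))   # [(0, 0), (0, 1), (1, 1)]
```

In this toy setting the detector finds more of the bright regions after translation than on the raw night input, mirroring the role the GAN plays in front of the real detectors.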