Deceiving Image-to-Image Translation Networks for Autonomous Driving with Adversarial Perturbations
Abstract—Deep neural networks (DNNs) have achieved impressive performance on computer vision problems; however, they have been found to be vulnerable to adversarial examples. For this reason, adversarial perturbations have recently been studied from several perspectives. Most previous works, however, have focused on image classification tasks, and adversarial perturbations have never been studied for image-to-image (Im2Im) translation tasks, which have shown great success in handling paired and/or unpaired mapping problems in autonomous driving and robotics. This paper examines different types of adversarial perturbations that can fool Im2Im frameworks for autonomous driving purposes. We propose both quasi-physical and digital adversarial perturbations that can make Im2Im models yield unexpected results. We then empirically analyze these perturbations and show that they generalize well under both paired settings for image synthesis and unpaired settings for style transfer. We also validate that there exist certain perturbation thresholds beyond which the Im2Im mapping is disrupted or becomes impossible. The existence of these perturbations reveals crucial weaknesses in Im2Im models. Lastly, we show how our methods illustrate the effect of perturbations on output quality, paving the way toward improving the robustness of current state-of-the-art networks for autonomous driving.
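As a rough illustration only (not the authors' released code or their exact attack), a digital perturbation against an Im2Im generator could be crafted along the following lines. The sketch assumes a frozen PyTorch generator G (e.g., a pix2pix- or CycleGAN-style model) with inputs normalized to [-1, 1]; the single-step FGSM-style update, the MSE objective, and the budget epsilon are illustrative choices, not the paper's specified method:

    import torch
    import torch.nn.functional as F

    def digital_perturbation(G, x, epsilon=0.03):
        """Craft a one-step perturbation that disrupts G's translation of x.

        G       -- image-to-image generator, assumed frozen and in eval mode
        x       -- input image batch of shape (N, C, H, W), values in [-1, 1]
        epsilon -- L-infinity budget of the perturbation (illustrative value)
        """
        for p in G.parameters():          # freeze the generator's weights
            p.requires_grad_(False)

        with torch.no_grad():
            y_clean = G(x)                # reference translation of the clean input

        x_adv = x.clone().detach().requires_grad_(True)
        # Distance between the perturbed and clean translations; we ascend it.
        loss = F.mse_loss(G(x_adv), y_clean)
        loss.backward()

        with torch.no_grad():
            # FGSM-style ascent step: push the output away from the clean result,
            # then clamp back into the valid image range.
            x_adv = x_adv + epsilon * x_adv.grad.sign()
            x_adv = x_adv.clamp(-1.0, 1.0)
        return x_adv.detach()

Comparing G(x_adv) against G(x) then exposes how strongly a budget of epsilon degrades the translation; sweeping epsilon upward is one way to probe for the disruption thresholds discussed above.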