Hiding Objects from Detectors: Exploring Transferrable Adversarial Patterns

27 Sept 2018 (modified: 05 May 2023) | ICLR 2019 Conference Withdrawn Submission
Abstract: Adversarial examples for neural networks have drawn much attention since they were first demonstrated. While most existing methods aim at deceiving image classification models into misclassification, or craft attacks for specific object instances in object detection tasks, we focus on creating universal adversaries that fool object detectors and hide objects from them. The adversaries we examine are universal in three ways: (1) they are not specific to particular object instances; (2) they are image-independent; (3) they further transfer to different, unknown models. To achieve this, we propose two novel techniques to improve the transferability of the adversaries: \textit{piling-up} and \textit{monochromatization}. Both techniques are shown to simplify the patterns of the generated adversaries and ultimately result in higher transferability.
Keywords: adversarial, object detection
TL;DR: We focus on creating universal adversaries to fool object detectors and hide objects from the detectors.
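The abstract only names the two techniques, so the following is a minimal NumPy sketch of what \textit{piling-up} (tiling one small base pattern over a larger region) and \textit{monochromatization} (constraining the pattern to a single colour) could look like in practice. The function names, tile size, and colour below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def pile_up(base_pattern, out_h, out_w):
    """Sketch of 'piling-up': repeat one small tile to cover a larger region,
    so the full adversarial pattern is just copies of a single base pattern."""
    ph, pw, _ = base_pattern.shape
    reps_h = -(-out_h // ph)  # ceil division
    reps_w = -(-out_w // pw)
    tiled = np.tile(base_pattern, (reps_h, reps_w, 1))
    return tiled[:out_h, :out_w, :]

def monochromatize(base_pattern, color=np.array([1.0, 0.0, 0.0])):
    """Sketch of 'monochromatization': keep only a single intensity map and
    modulate one fixed RGB colour with it, removing per-channel freedom."""
    intensity = base_pattern.mean(axis=-1, keepdims=True)  # H x W x 1
    return intensity * color.reshape(1, 1, 3)

# Hypothetical usage: build an image-independent pattern from a small tile
# (here random as a placeholder for an optimized adversarial tile), restrict
# it to one colour, then pile it up to the desired size.
tile = np.random.rand(32, 32, 3)
pattern = pile_up(monochromatize(tile), 256, 256)
```

Both operations restrict the degrees of freedom of the adversary, which matches the abstract's claim that simpler patterns are what ultimately transfer better across models.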