Attacking Object Detectors Without Changing the Target Object

23 Feb 2020 · OpenReview Archive Direct Upload
Abstract: Object detectors, such as Faster R-CNN and YOLO, have numerous applications, including in critical systems such as self-driving cars and unmanned aerial vehicles. Their vulnerabilities must be studied thoroughly before they are deployed in critical systems, to avoid irrecoverable losses caused by intentional attacks. Researchers have proposed methods for crafting adversarial examples to study security risks in object detectors, but all of these methods require modifying pixels inside the target objects, and some modifications are substantial enough to significantly distort the target objects. In this paper, an algorithm is proposed that derives an adversarial signal placed around the border of a target object to fool object detectors. Computationally, the algorithm seeks a border around the target object that misleads Faster R-CNN into producing a very large bounding box and ultimately decreases its confidence in the target object. Using a stop sign as the target object, adversarial borders of four different sizes are generated and evaluated on 77 videos: five in-car videos for digital attacks and 72 videos for physical attacks. The experimental results show that the adversarial border can effectively fool Faster R-CNN and YOLOv3 both digitally and physically. In addition, the results on YOLOv3 indicate that the adversarial border is transferable, which is vital for black-box attacks.
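
The abstract describes the attack only at a high level. As a rough illustration, the sketch below shows one way such a border-only perturbation could be optimized in PyTorch: a ring-shaped mask confines the perturbation to the region just outside the object's bounding box, and PGD-style gradient steps drive down the detector's confidence. The `detector` interface, the box coordinates, and the confidence-only loss (the paper additionally pushes the detector toward an enlarged bounding box) are assumptions made for illustration, not the authors' implementation.

```python
import torch

# Minimal sketch, NOT the paper's implementation. Assumptions:
#  - `image` is a float tensor of shape (1, 3, H, W) with values in [0, 1].
#  - `detector(img)` returns the detector's confidence scores for the
#    target class on `img` (the real Faster R-CNN interface and the
#    paper's loss, which also enlarges the bounding box, differ).
#  - `box` = (x0, y0, x1, y1) is the target object's bounding box in pixels.

def border_mask(h, w, box, border_px):
    """1 on a ring of width `border_px` just outside `box`, 0 elsewhere,
    so pixels inside the target object are never modified."""
    x0, y0, x1, y1 = box
    mask = torch.zeros(1, 1, h, w)
    mask[..., max(y0 - border_px, 0):min(y1 + border_px, h),
              max(x0 - border_px, 0):min(x1 + border_px, w)] = 1.0
    mask[..., y0:y1, x0:x1] = 0.0  # clear the interior of the object
    return mask

def attack_border(image, detector, box, border_px=16, steps=200, alpha=2 / 255):
    """PGD-style optimization of the border pixels to suppress detection."""
    _, _, h, w = image.shape
    mask = border_mask(h, w, box, border_px)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = torch.clamp(image + delta * mask, 0.0, 1.0)
        loss = detector(adv).sum()  # total target-class confidence
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend to reduce confidence
            delta.clamp_(-1.0, 1.0)
            delta.grad.zero_()
    return torch.clamp(image + delta.detach() * mask, 0.0, 1.0)
```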