Investigating the robustness of multi-view detection to current adversarial patch threats

Published: 01 Jan 2022 (Last Modified: 08 Nov 2024), ATSIP 2022, CC BY-SA 4.0
Abstract: As deep neural networks are increasingly integrated into our daily lives, the safety and reliability of their results have become of paramount importance. However, the vulnerability of these networks to adversarial attacks is an obstacle to wider adoption, especially in safety-critical applications: a malicious actor can manipulate the output of a deep neural network by adding nearly imperceptible noise to the input, and adversarial patch attacks make real-world implementations of these threats even easier. Studying these attacks has therefore become a rapidly growing field of artificial intelligence research. One aspect of this research is examining the behavior of patch attacks in various scenarios to understand their inner workings and find novel methods to secure deep neural networks. In this paper, we examine the effectiveness of existing adversarial patch attacks against a multi-view detector. To this end, we propose an evaluation framework in which an adversarial patch is trained against a single view of a multi-view dataset and then transferred to the other views using perspective geometric transforms. Our results confirm that current single-view adversarial patches struggle against multi-view detectors, especially when only a few views are attacked. These observations suggest that multi-view detection methods may be a step towards reliable and safe AI.
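A minimal sketch (not the authors' code) of the patch-transfer step described above: a patch optimized on one camera view is warped into another view with a perspective (homography) transform. It assumes the patch's target corner coordinates in each view are known, e.g. from the multi-view camera calibration; the function name and interface are illustrative.

```python
import cv2
import numpy as np


def warp_patch_to_view(patch, corners_dst, view_shape):
    """Warp an adversarial patch into another camera view.

    patch       : H x W x 3 uint8 patch image optimized on the source view
    corners_dst : 4 x 2 array of the patch corner positions (pixels) in the target view,
                  ordered top-left, top-right, bottom-right, bottom-left
    view_shape  : (height, width) of the target view image
    """
    ph, pw = patch.shape[:2]
    # Corners of the patch in its own image coordinates
    corners_patch = np.float32([[0, 0], [pw, 0], [pw, ph], [0, ph]])
    # Homography mapping patch coordinates onto the target-view positions
    H = cv2.getPerspectiveTransform(corners_patch, np.float32(corners_dst))
    vh, vw = view_shape
    # Warp the patch and a binary mask into the target view's image plane
    warped = cv2.warpPerspective(patch, H, (vw, vh))
    mask = cv2.warpPerspective(np.full((ph, pw), 255, np.uint8), H, (vw, vh))
    return warped, mask


# Usage: paste the warped patch onto a frame from another camera
# warped, mask = warp_patch_to_view(patch, corners_dst, frame.shape[:2])
# frame[mask > 0] = warped[mask > 0]
```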