Adversarial attacks in computer vision: a survey

Published: 01 Jan 2024 · Last Modified: 12 May 2025 · J. Membr. Comput. 2024 · CC BY-SA 4.0
Abstract: Deep learning, a central topic in artificial intelligence, has been widely applied across many fields, and computer vision applications such as image classification and object detection have made remarkable advances. However, deep neural networks (DNNs) have been shown to be adversarially vulnerable. In image classification, carefully crafted perturbations added to clean images produce adversarial examples that change the predictions of DNNs. The existence of adversarial examples therefore poses a significant obstacle to the secure deployment of DNNs in practice and has drawn considerable attention from researchers in related fields. A substantial body of work on adversarial attacks has appeared in recent years. In this survey, we first introduce the relevant concepts and background. We then systematically review existing adversarial attack methods and research progress, organized by computer vision task. Finally, we summarize several common defense methods and discuss open challenges.
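To make the abstract's notion of a "carefully crafted perturbation" concrete, the sketch below illustrates the Fast Gradient Sign Method (FGSM), one of the canonical attacks a survey like this typically covers. This is a minimal illustration, not the paper's own method; the model, `epsilon` value, and tensor names are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """One-step FGSM sketch: perturb the input in the direction of the
    sign of the loss gradient, bounded by epsilon in the L-infinity norm.
    (Illustrative only; model and epsilon are assumed, not from the paper.)"""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # classification loss
    loss.backward()                              # gradient w.r.t. the input
    # Adversarial example: x_adv = x + epsilon * sign(grad_x loss)
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()              # keep pixels in valid range
```

Under this sketch, `fgsm_attack(model, x, y)` returns an image that is visually close to `x` (each pixel shifted by at most `epsilon`) yet can flip the model's prediction, which is exactly the vulnerability the survey examines.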