The Vulnerability of the Neural Networks Against Adversarial Examples in Deep Learning Algorithms

Published: 17 Nov 2020, Last Modified: 06 Nov 2024 · arXiv · CC BY-NC-SA 4.0
Abstract: With advancements in computer vision, network security, natural language processing, and related domains, deep learning technologies have begun to reveal certain security vulnerabilities. Existing deep learning algorithms often fail to capture the intrinsic characteristics of data, rendering them ineffective when confronted with malicious inputs. In response to these security challenges, this paper examines the problem of adversarial examples, categorizing existing black-box and white-box attack and defense methods. It surveys common white-box attack methods, offers a comparative analysis of the similarities and differences between black-box and white-box attacks, and provides a concise overview of recent applications of adversarial examples across various scenarios. It then compares several defense techniques and evaluates their efficacy against both black-box and white-box attacks, concluding with a summary of current research challenges and a forecast of future developments in this field.
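One canonical white-box attack of the kind the abstract surveys is the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction of the sign of the loss gradient. The sketch below is purely illustrative and is not taken from the paper: the linear softmax classifier, its random weights, and the epsilon value are all hypothetical choices made so the example runs with NumPy alone.

```python
import numpy as np

# Hypothetical linear classifier: logits = W @ x + b (random weights for illustration).
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad_wrt_input(x, y):
    """Gradient of softmax cross-entropy loss w.r.t. the input x.

    For logits = W @ x + b, the gradient is W^T (p - onehot(y)).
    """
    p = softmax(W @ x + b)
    onehot = np.eye(3)[y]
    return W.T @ (p - onehot)

def fgsm(x, y, eps=0.1):
    """FGSM: take a single step of size eps in the sign of the input gradient."""
    return x + eps * np.sign(loss_grad_wrt_input(x, y))

x = rng.standard_normal(4)   # a clean input (hypothetical)
y = 0                        # its true label (hypothetical)
x_adv = fgsm(x, y, eps=0.25)
print(np.abs(x_adv - x).max())
```

Because FGSM moves every coordinate by at most eps, the adversarial example stays within an L-infinity ball of radius eps around the clean input, which is why such perturbations can remain imperceptible while still flipping the model's prediction.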