Adversarial Examples Are Closely Relevant to Neural Network Models - A Preliminary Experiment Explore

Published: 01 Jan 2022, Last Modified: 07 May 2024, ICSI (2) 2022, CC BY-SA 4.0
Abstract: Neural networks are fragile: adversarial examples can readily fool them. This situation has drawn broad attention from researchers around the world and has produced many results, such as attack and defense methods and algorithms. However, how adversarial examples actually affect neural networks remains poorly understood. We present hypotheses and design extensive experiments to learn more about adversarial examples and to verify these hypotheses. Through the experiments, we investigate a neural network’s sensitivity to adversarial examples across several aspects, including model architectures, activation functions, and loss functions. The results show that adversarial examples are closely related to all of these factors. In particular, this sensitivity property can help distinguish adversarial examples from clean data, which may inspire further research on adversarial example detection.
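The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the kind of probe it describes: generate adversarial examples with a standard one-step attack (FGSM) and check how strongly the model's predictions react, while the architecture, activation function, or loss function is swapped out. The model, loss, epsilon value, and the "fraction of flipped predictions" measure are illustrative assumptions, not the paper's actual setup.

```python
import torch
import torch.nn as nn

# Hypothetical small classifier; the paper does not specify its architectures,
# so this is only an illustrative stand-in.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),          # swap for nn.Tanh() / nn.Sigmoid() to probe activation sensitivity
    nn.Linear(128, 10),
)
loss_fn = nn.CrossEntropyLoss()  # swap for another loss to probe loss-function sensitivity

def fgsm_attack(model, x, y, epsilon=0.1):
    """One-step FGSM: perturb x in the direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy usage on random data (stand-in for MNIST-sized inputs).
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_adv = fgsm_attack(model, x, y)

# "Sensitivity" is sketched here as the fraction of predictions that change under
# the perturbation; the paper's exact measure is not given in the abstract.
with torch.no_grad():
    flipped = (model(x).argmax(1) != model(x_adv).argmax(1)).float().mean()
print(f"fraction of predictions flipped: {flipped:.2f}")
```

Re-running such a probe with different architectures, activations, and losses would give one concrete way to compare how sensitive each configuration is to the same adversarial perturbation.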