Is ReLU Adversarially Robust?

Published: 03 Jul 2023, Last Modified: 03 Jul 2023
Venue: LXAI @ ICML 2023 Regular Deadline Poster
Keywords: adversarial robustness, deep learning
TL;DR: This paper investigates the role of rectified linear unit (ReLU) activation functions in generating adversarial examples, proposes a modified version of ReLU for improved robustness, and further enhances its performance through adversarial training.
Abstract: The efficacy of deep learning models has been called into question by the presence of adversarial examples. Addressing the vulnerability of deep learning models to adversarial examples is crucial for their continued development and deployment. In this work, we focus on the role of rectified linear unit (ReLU) activation functions in the generation of adversarial examples. ReLU functions are widely used in deep learning models because they ease the training process. However, our empirical analysis demonstrates that ReLU functions are not robust against adversarial examples. We propose a modified version of the ReLU function that improves robustness against adversarial examples, and our experiments confirm the effectiveness of this modification. Additionally, we demonstrate that applying adversarial training to our modified model further enhances its robustness compared to a standard, unmodified model.
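The abstract does not specify the exact ReLU modification, so the following is only an illustrative sketch, not the authors' method. It assumes one common robustness-oriented variant, a bounded ReLU that caps activations (the class name BoundedReLU, the cap parameter, and the fgsm_adversarial_step helper are all hypothetical), combined with a single FGSM adversarial-training step in the spirit of the abstract's pipeline.

import torch
import torch.nn as nn

class BoundedReLU(nn.Module):
    """Hypothetical robustness-oriented ReLU variant: clips activations
    to [0, cap] so an input perturbation cannot grow without bound as it
    propagates through the network. This is an assumption for
    illustration; the paper's actual modification may differ."""
    def __init__(self, cap: float = 1.0):
        super().__init__()
        self.cap = cap

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.clamp(x, min=0.0, max=self.cap)

def fgsm_adversarial_step(model, loss_fn, x, y, eps: float = 8 / 255):
    """One FGSM adversarial-training step (Goodfellow et al., 2015):
    perturb the input along the sign of the loss gradient, then return
    the loss on the perturbed input for the optimizer to minimize."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()
    return loss_fn(model(x_adv), y)  # backprop this loss to update weights

Swapping nn.ReLU() for BoundedReLU() in a model and training on the loss returned by fgsm_adversarial_step mirrors the two-stage recipe the abstract describes (modified activation plus adversarial training), without claiming that either choice matches the paper's exact construction.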
Submission Type: Archival (to be published in the Journal of LatinX in AI (LXAI) Research)
Submission Number: 5