Adversarial Defense Via Data Dependent Activation Function and Total Variation Minimization

27 Sept 2018 (modified: 13 Apr 2025) · ICLR 2019 Conference Withdrawn Submission · Readers: Everyone
Abstract: We improve the robustness of deep neural nets to adversarial attacks by using an interpolating function as the output activation. This data-dependent activation function markedly improves both classification accuracy and stability under adversarial perturbations. Together with total variation minimization of adversarial images and augmented training, under the strongest attacks we achieve accuracy improvements of up to 20.6%, 50.7%, and 68.7% against the fast gradient sign method, the iterative fast gradient sign method, and the Carlini-Wagner L2 attack, respectively. Our defense strategy is complementary to many existing methods. We give an intuitive explanation of our defense strategy by analyzing the geometry of the feature space. For reproducibility, the code will be available on GitHub.
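
As a rough sketch of the total variation minimization step described in the abstract, the snippet below denoises an input before classification using Chambolle's TV algorithm from scikit-image; the `weight` value and the choice of `denoise_tv_chambolle` as the solver are our illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: total variation minimization as test-time preprocessing.
# Assumption: scikit-image's denoise_tv_chambolle as the TV solver;
# the weight below is illustrative, not taken from the paper.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_defend(image: np.ndarray, weight: float = 0.1) -> np.ndarray:
    """Return a TV-denoised copy of `image` (H x W x C floats in [0, 1]).

    Chambolle's projection algorithm approximately solves the ROF model
    min_u TV(u) + (1 / (2 * weight)) * ||u - image||^2, suppressing the
    high-frequency structure that adversarial perturbations tend to add.
    """
    return denoise_tv_chambolle(image, weight=weight, channel_axis=-1)

# Usage: denoise the (possibly adversarial) input before the forward pass,
# e.g. logits = model(tv_defend(x_adv)).
```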
Keywords: Adversarial Attack, Adversarial Defense, Data Dependent Activation Function, Total Variation Minimization
TL;DR: We propose adversarial defense strategies based on a data-dependent activation function, total variation minimization, and training-data augmentation.
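
To make the data-dependent activation concrete: instead of a fixed softmax head, the output for a test point is interpolated from training labels in feature space. The kernel interpolation below is a simplified stand-in for the paper's interpolating function, shown only to convey the idea; the function name and the Gaussian bandwidth `sigma` are our assumptions.

```python
# Sketch: a data-dependent output activation that interpolates training
# labels in feature space. This Nadaraya-Watson kernel interpolation is a
# simplified stand-in for the paper's interpolating function; sigma is an
# illustrative choice, not a value from the paper.
import numpy as np

def interpolating_activation(test_feats: np.ndarray,
                             train_feats: np.ndarray,
                             train_labels_onehot: np.ndarray,
                             sigma: float = 1.0) -> np.ndarray:
    """Predict class probabilities for test_feats (m x d) by weighting the
    one-hot labels (n x k) of train_feats (n x d) with a Gaussian kernel on
    feature-space distances."""
    # Pairwise squared distances between test and training features: m x n.
    d2 = ((test_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # m x n similarity weights
    w /= w.sum(axis=1, keepdims=True)      # normalize over the training set
    return w @ train_labels_onehot         # m x k soft predictions
```

Because the prediction is tied to the geometry of the training features rather than to a fixed linear decision boundary, small adversarial shifts in feature space move the interpolated output less abruptly, which is the intuition the abstract appeals to.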
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/adversarial-defense-via-data-dependent/code)