A Simple Stochastic Neural Network for Improving Adversarial Robustness

Published: 01 Jan 2023 · Last Modified: 29 Aug 2024 · ICME 2023 · CC BY-SA 4.0
Abstract: The vulnerability of deep learning algorithms to malicious attacks has garnered significant attention from researchers in recent years. To provide more reliable services for safety-sensitive applications, prior studies have introduced Stochastic Neural Networks (SNNs) as a means of improving adversarial robustness. However, existing SNNs are not designed from the perspective of optimizing the adversarial decision boundary, and they rely on complex and expensive adversarial training. To find an appropriate decision boundary, we propose a simple and effective stochastic neural network that incorporates a regularization term into the objective function. Our approach maximizes the variance of the feature distribution in a low-dimensional space and forces the feature directions to align with the eigenvectors of the covariance matrix. Because it does not require adversarial training, our method has a lower computational cost and does not sacrifice accuracy on clean examples, making it suitable for use with a variety of models. Extensive experiments against various well-known white- and black-box attacks show that our proposed method outperforms state-of-the-art methods.
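The abstract describes a regularizer that maximizes feature variance in a low-dimensional space while aligning feature directions with the eigenvectors of the covariance matrix. The paper's exact loss is not given here, so the following is only a minimal illustrative sketch of one way such a term could look: it rewards large per-dimension variance and penalizes off-diagonal covariance, which vanishes exactly when the feature axes coincide with the covariance eigenvectors. The function name `snn_regularizer` and the weights `alpha` and `beta` are hypothetical, not from the paper.

```python
import numpy as np

def snn_regularizer(features, alpha=1.0, beta=1.0):
    """Illustrative sketch (not the paper's actual loss).

    features: (batch, dim) array of low-dimensional feature vectors.
    Returns a scalar penalty that is smaller when per-dimension
    variance is large and when feature axes align with the
    eigenvectors of the feature covariance matrix.
    """
    # Center the features and form the sample covariance matrix.
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (features.shape[0] - 1)

    # Variance term: negate the total variance so that minimizing
    # the penalty maximizes the spread of the feature distribution.
    variance_term = -np.trace(cov)

    # Alignment term: penalize off-diagonal covariance entries.
    # These are zero precisely when the coordinate axes are the
    # eigenvectors of the covariance matrix.
    off_diag = cov - np.diag(np.diag(cov))
    alignment_term = np.sum(off_diag ** 2)

    return alpha * variance_term + beta * alignment_term
```

In a training loop, a term like this would be added to the task loss so the network learns features that are both well spread out and eigenvector-aligned, which the authors argue yields a better adversarial decision boundary without adversarial training.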