Abstract: While deep neural networks have been achieving state-of-the-art performance across a wide variety of applications, their vulnerability to adversarial attacks limits
their widespread deployment for safety-critical applications. Alongside other adversarial defense approaches
being investigated, there has been very recent interest
in improving the adversarial robustness of deep neural networks through the introduction of perturbations during the
training process. However, such methods rely on fixed,
pre-defined perturbations and require significant hyperparameter tuning, which makes them very difficult to apply in a general fashion. In this study, we introduce
Learn2Perturb, an end-to-end feature perturbation learning approach for improving the adversarial robustness of
deep neural networks. More specifically, we introduce
novel perturbation-injection modules that are incorporated
at each layer to perturb the feature space and increase
uncertainty in the network. This feature perturbation is
performed at both the training and the inference stages.
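To make the injection mechanism concrete, the following is a minimal sketch of such a perturbation-injection module, assuming a PyTorch-style setting; the module and parameter names (PerturbationInjection, sigma, init_scale) are illustrative assumptions rather than the paper's reference implementation.

```python
# A minimal sketch, assuming PyTorch; names are hypothetical, not the
# authors' reference code.
import torch
import torch.nn as nn

class PerturbationInjection(nn.Module):
    """Adds learnable, per-channel Gaussian noise to a layer's feature maps."""

    def __init__(self, num_channels: int, init_scale: float = 0.1):
        super().__init__()
        # One learnable noise magnitude per feature channel.
        self.sigma = nn.Parameter(torch.full((1, num_channels, 1, 1), init_scale))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Zero-mean Gaussian noise scaled by the learned magnitudes; applied at
        # both training and inference, as described in the abstract.
        return x + torch.randn_like(x) * self.sigma.abs()
```

In such a scheme, one module of this kind would sit after each layer of the backbone, so that every intermediate feature map is perturbed stochastically.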
Furthermore, inspired by Expectation-Maximization, an
alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively. Experimental results on CIFAR-10 and CIFAR-100
datasets show that the proposed Learn2Perturb method can
result in deep neural networks that are 4-7% more robust
against l∞ FGSM and PGD adversarial attacks and that significantly
outperform the state-of-the-art against the l2 C&W attack and
a wide range of well-known black-box attacks.
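As a rough illustration of the alternating back-propagation training described above, the sketch below alternates between updating the network weights with the noise parameters frozen and updating the noise parameters with the weights frozen; the function and argument names (alternating_train_step, opt_net, opt_noise) are hypothetical, and the actual Learn2Perturb objective for the noise parameters may differ.

```python
# A rough sketch of EM-inspired alternating updates, assuming PyTorch;
# an illustrative assumption, not the paper's exact training procedure.
# opt_net and opt_noise are torch.optim optimizers over net_params and
# noise_params, respectively.
def alternating_train_step(model, net_params, noise_params, loss_fn, batch,
                           opt_net, opt_noise):
    x, y = batch

    # Step 1: update the network weights while the noise parameters stay fixed.
    for p in noise_params:
        p.requires_grad_(False)
    opt_net.zero_grad()
    loss_fn(model(x), y).backward()
    opt_net.step()

    # Step 2: update the noise (perturbation) parameters while the weights stay fixed.
    for p in noise_params:
        p.requires_grad_(True)
    for p in net_params:
        p.requires_grad_(False)
    opt_noise.zero_grad()
    loss_fn(model(x), y).backward()
    opt_noise.step()

    # Re-enable gradients on the network weights for the next iteration.
    for p in net_params:
        p.requires_grad_(True)
```

Freezing one group of parameters while optimizing the other mirrors the alternating, consecutive updates of network and noise parameters that the abstract attributes to the EM-inspired training algorithm.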