Adversarial Examples Defense via Combining Data Transformations and RBF Layers

Published: 01 Jan 2021, Last Modified: 15 May 2023 · PRICAI (2) 2021
Abstract: Convolutional Neural Networks (CNNs) are vulnerable to adversarial attacks: by adding imperceptible perturbations to input images, adversarial attack methods can fool CNN models with high confidence. A main reason is that existing CNN models usually use softmax-like linear classifiers. Recent research indicates that Radial Basis Function (RBF) networks can improve nonlinear classification capability and demonstrate robustness against white-box attacks, while data transformations can smooth the classification boundary and are highly effective at countering black-box attacks. We propose to combine data transformations and RBF layers to simultaneously enhance the robustness of CNNs against both white-box and black-box attacks. However, applying RBF layers to a very deep CNN makes training convergence difficult, and data transformations can reduce classification accuracy because they introduce noise. To address these issues, we further propose a deep supervision strategy and a novel dual loss function. Experiments on two publicly available datasets demonstrate that applying the proposed methods to existing CNN models greatly improves their robustness against adversarial attacks while preserving their original recognition performance.
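To make the RBF classification head described above concrete, here is a minimal PyTorch sketch of replacing a softmax-style linear classifier with an RBF layer. This is not the paper's implementation: the abstract does not specify the center initialization, bandwidth parameterization, deep supervision strategy, or dual loss, so the `RBFLayer` class below, its random center initialization, and the per-class log-bandwidth are illustrative assumptions.

```python
import torch
import torch.nn as nn


class RBFLayer(nn.Module):
    """Hypothetical RBF classification head.

    Instead of a linear (softmax-style) classifier, each class is
    represented by a learned center in feature space; the logit for a
    class is the negative squared distance to its center, scaled by a
    learned per-class bandwidth.
    """

    def __init__(self, in_features: int, num_classes: int):
        super().__init__()
        # One learned prototype center per class (random init is an assumption).
        self.centers = nn.Parameter(torch.randn(num_classes, in_features))
        # Log-bandwidths keep the distance scaling positive after exp().
        self.log_gamma = nn.Parameter(torch.zeros(num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pairwise squared Euclidean distances between the batch of
        # feature vectors (B, in_features) and the class centers
        # (num_classes, in_features) -> logits of shape (B, num_classes).
        dists = torch.cdist(x, self.centers).pow(2)
        # Larger logit means the feature is closer to that class center.
        return -torch.exp(self.log_gamma) * dists
```

In use, such a layer would replace the final linear layer of a CNN feature extractor; because closer-to-center features receive higher logits, the output can be fed directly to a standard cross-entropy loss during training.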