Inter-separability and intra-concentration to enhance stochastic neural network adversarial robustness
Abstract: It has been shown that Deep Neural Networks can be easily fooled by adding imperceptible perturbations to inputs, producing so-called adversarial examples. To address this issue, we propose a defense method called Inter-Separability and Intra-Concentration Stochastic Neural Networks (ISIC-SNN). The proposed ISIC-SNN method learns to enlarge the separation between representations of different labels using label embeddings and a designed inter-separability loss. It introduces uncertainty into the latent feature space via the variational information bottleneck and enhances the compactness of the stochastic features with an intra-concentration loss. Finally, it classifies inputs by the dot-product similarity between stochastic feature representations and the label embeddings. ISIC-SNN is trained with standard training, which is far more efficient than adversarial training. Experiments on the SVHN, CIFAR-10, and CIFAR-100 datasets demonstrate the superior defensive capability of the proposed method compared to various SNN-based defense methods.
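The abstract names three ingredients: an inter-separability loss over label embeddings, stochastic features from a variational information bottleneck (VIB), and an intra-concentration loss pulling features toward their class embedding, with classification by dot-product similarity. The paper itself does not give the loss formulas here, so the sketch below is a hypothetical numpy illustration of plausible forms of these terms (pairwise cosine penalty for inter-separability, squared distance to the own-class embedding for intra-concentration, and the standard VIB KL term with the reparameterization trick); the actual definitions may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, feat_dim, batch = 3, 8, 4

# Hypothetical learnable label embeddings, one vector per class.
label_emb = rng.normal(size=(num_classes, feat_dim))

# Encoder outputs: mean and log-variance of the stochastic features (VIB-style).
mu = rng.normal(size=(batch, feat_dim))
log_var = 0.1 * rng.normal(size=(batch, feat_dim))

# Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
eps = rng.normal(size=(batch, feat_dim))
z = mu + np.exp(0.5 * log_var) * eps

labels = np.array([0, 1, 2, 0])

# Classification: dot-product similarity between stochastic features
# and the label embeddings gives the class logits.
logits = z @ label_emb.T                      # shape (batch, num_classes)

# Inter-separability (hypothetical form): penalize pairwise cosine
# similarity between distinct class embeddings, pushing them apart.
unit = label_emb / np.linalg.norm(label_emb, axis=1, keepdims=True)
cos = unit @ unit.T
inter_sep_loss = (cos.sum() - np.trace(cos)) / (num_classes * (num_classes - 1))

# Intra-concentration (hypothetical form): mean squared distance of each
# stochastic feature to its own class embedding, encouraging compactness.
intra_con_loss = np.mean(np.sum((z - label_emb[labels]) ** 2, axis=1))

# VIB regularizer: KL( N(mu, sigma^2) || N(0, I) ), always nonnegative.
kl = 0.5 * np.mean(np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1))

print(logits.shape, round(float(kl), 4))
```

A full training loss under these assumptions would combine cross-entropy on `logits` with weighted `inter_sep_loss`, `intra_con_loss`, and `kl` terms, all minimized with standard (non-adversarial) training as the abstract states.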