SNN-RAT: Robustness-enhanced Spiking Neural Network through Regularized Adversarial Training

Published: 31 Oct 2022, Last Modified: 13 Oct 2022 · NeurIPS 2022 Accept
Keywords: Spiking Neural Networks, Neural Coding, Perturbation Analysis
TL;DR: Experimental and theoretical insights about the robustness of spiking neural networks motivate a robust training scheme.
Abstract: Spiking neural networks (SNNs) are promising candidates for wide deployment in real-time and safety-critical applications as neuromorphic computing advances. Recent work has demonstrated that SNNs are insensitive to small random perturbations owing to their discrete internal information representation. However, the variety of training algorithms and the involvement of the temporal dimension pose more threats to the robustness of SNNs than to that of typical neural networks. We examine the vulnerability of SNNs by constructing adversaries based on different differentiable approximation techniques. By deriving a Lipschitz constant specifically for the spike representation, we first theoretically answer the question of how much adversarial robustness is retained in SNNs. To defend against this broad family of attacks, we then propose a regularized adversarial training scheme with low computational overhead. SNNs benefit from constraining the amplification of the perturbed spike distance and from generalizing over multiple adversarial $\epsilon$-neighbourhoods. Our experiments on image recognition benchmarks show that our training scheme defends against powerful adversarial attacks crafted from strong differentiable approximations. In particular, our approach renders black-box Projected Gradient Descent (PGD) attacks nearly ineffective. We believe this work will facilitate the spread of SNNs in safety-critical applications and help in understanding the robustness of the human brain.
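To make the two ingredients of the abstract concrete, here is a minimal PyTorch sketch, not the authors' released code: (1) adversaries crafted by backpropagating through a differentiable surrogate-gradient approximation of the spike function, and (2) an adversarial training step that samples the perturbation budget from several $\epsilon$-neighbourhoods and adds a spectral-norm weight penalty as a crude stand-in for the paper's bound on perturbed spike-distance amplification. All names (`SurrogateSpike`, `SimpleSNN`, `pgd_attack`, `train_step`) and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; rectangular surrogate gradient in the backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        v, = ctx.saved_tensors
        # Pass gradients only inside a window around the firing threshold.
        return grad_out * (v.abs() < 0.5).float()


class SimpleSNN(nn.Module):
    """Two-layer LIF network; the static input is repeated for T time steps."""
    def __init__(self, in_dim=784, hidden=256, out_dim=10, T=4, tau=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        self.T, self.tau = T, tau

    def forward(self, x):
        mem = torch.zeros(x.size(0), self.fc1.out_features, device=x.device)
        logits = 0.0
        for _ in range(self.T):
            mem = self.tau * mem + self.fc1(x.flatten(1))   # leaky integration
            spk = SurrogateSpike.apply(mem - 1.0)           # threshold = 1
            mem = mem * (1.0 - spk)                          # hard reset after a spike
            logits = logits + self.fc2(spk)
        return logits / self.T


def pgd_attack(model, x, y, eps, alpha, steps):
    """L-infinity PGD; gradients reach the input through the surrogate spike function."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv


def train_step(model, opt, x, y, eps_grid=(2 / 255, 4 / 255, 8 / 255), reg=1e-4):
    # Sample epsilon from several neighbourhoods so the model generalizes
    # across perturbation budgets, as the abstract suggests.
    eps = eps_grid[int(torch.randint(len(eps_grid), (1,)))]
    x_adv = pgd_attack(model, x, y, eps=eps, alpha=eps / 4, steps=7)
    loss = F.cross_entropy(model(x_adv), y)
    # Spectral-norm penalty as a proxy for controlling the layerwise Lipschitz constant.
    for m in model.modules():
        if isinstance(m, nn.Linear):
            loss = loss + reg * torch.linalg.matrix_norm(m.weight, ord=2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The spectral-norm penalty here merely approximates the spirit of the paper's spike-representation Lipschitz bound; the exact regularizer and attack configuration should be taken from the supplementary material.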
Supplementary Material: zip
