Training Adversarially Robust SNNs with Gradient Sparsity Regularization

15 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: applications to neuroscience & cognitive science
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: spiking neural network, robustness, adversarial attack, gradient with respect to input, gradient sparsity regularization
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We improve the robustness of SNNs by regularizing the gradient sparsity of the output probability after the softmax layer with respect to the input image.
Abstract: Spiking Neural Networks (SNNs) have attracted much attention for their energy-efficient operations and biologically inspired structures, offering potential advantages over Artificial Neural Networks (ANNs) in terms of interpretability and energy efficiency. However, similar to ANNs, the robustness of SNNs remains a challenge, especially when facing adversarial attacks. Existing techniques, whether adapted from ANNs or specifically designed for SNNs, have shown limitations in training SNNs or in defending against strong attacks. In this paper, we present a novel approach to enhancing the robustness of SNNs through gradient sparsity regularization. We observe that SNNs exhibit greater resilience to random perturbations than to adversarial perturbations, even when the random perturbations are larger in scale. Motivated by this finding, we aim to narrow the performance gap between SNNs under adversarial and random perturbations, thereby improving their overall robustness. To achieve this, we theoretically prove that this gap is upper bounded by the gradient sparsity of the output probability after the softmax layer with respect to the input image, laying the groundwork for a practical strategy to train robust SNNs by regularizing gradient sparsity. The effectiveness of our approach is validated through extensive experiments on the CIFAR-10 and CIFAR-100 datasets, whose results demonstrate improvements in the robustness of SNNs. Overall, our work contributes to the understanding and improvement of SNN robustness, highlighting the importance of gradient sparsity in SNNs.
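For illustration, here is a minimal PyTorch sketch of how a gradient sparsity regularizer of this kind might be implemented. It is not the paper's code: the penalty form (L1 norm of the input gradient of the predicted-class softmax probability), the hyperparameter `lam`, and the `model` interface are all assumptions for the sake of a runnable example.

```python
import torch
import torch.nn.functional as F

def sparsity_regularized_loss(model, images, labels, lam=1e-3):
    """Cross-entropy loss plus an L1 (sparsity) penalty on the gradient of
    the softmax output w.r.t. the input image. A sketch only: the exact
    probability differentiated, the norm, and `lam` are assumptions."""
    images = images.clone().requires_grad_(True)

    logits = model(images)          # SNN forward pass (e.g., rate-decoded over time steps)
    probs = F.softmax(logits, dim=1)

    # Gradient of the true-class probability w.r.t. the input, computed with
    # create_graph=True so the penalty itself can be backpropagated through.
    true_class_prob = probs.gather(1, labels.unsqueeze(1)).sum()
    grad_input, = torch.autograd.grad(true_class_prob, images, create_graph=True)

    ce_loss = F.cross_entropy(logits, labels)
    sparsity_penalty = grad_input.abs().sum(dim=(1, 2, 3)).mean()  # per-image L1 norm
    return ce_loss + lam * sparsity_penalty
```

In a training loop, this loss would replace the plain cross-entropy objective. Note that the double backward pass requires the SNN's surrogate gradient functions to be twice differentiable, and roughly doubles the cost of each training step.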
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 24