Unveiling Robustness of Spiking Neural Networks against Data Poisoning Attacks

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · IJCNN 2024 · CC BY-SA 4.0
Abstract: Spiking Neural Networks (SNNs) are gaining attention as a potential evolution of Artificial Neural Networks, mimicking neural computation in the human brain. Although SNNs are known for their energy efficiency and sparse, event-driven operation in neuromorphic computing, their resilience against adversarial attacks remains largely unexplored. This study evaluates large-scale SNNs on medical and non-medical datasets and reveals their poor performance in medical image classification. We apply three adversarial attacks to SNNs and observe a significant performance drop as attack severity increases. Our main contribution is a lightweight SNN-based model that outperforms large-scale SNNs in medical image classification and remains robust under different adversarial attacks. We also introduce a novel metric, the Attack Diversion Score, to quantify the performance divergence of SNNs under attack. Our model, which employs Spatial Learning Through Time, is memory- and power-efficient and hence suitable for computer-aided diagnosis. Across three datasets, our approach is validated against Spiking ResNet-18 and Spiking VGG-11 and proves robust to different data poisoning attacks. We confirm the utility of our model through several quantitative and qualitative measures. The source code of our implementation will be made publicly available to foster reproducibility and support future research.
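The abstract does not specify which three attacks are used, so the sketch below is only a generic point of reference, not the paper's method: a classic label-flipping data-poisoning baseline showing how attack severity can be swept over a training-label array. All names and the NumPy-based setup are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's attacks): label-flipping data poisoning,
# a common baseline where a fraction `rate` of training labels is corrupted.
import numpy as np

def flip_labels(labels: np.ndarray, num_classes: int, rate: float,
                rng: np.random.Generator) -> np.ndarray:
    """Return a copy of `labels` with a fraction `rate` flipped to a
    different, randomly chosen class."""
    poisoned = labels.copy()
    n_poison = int(rate * len(labels))
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    # Offset by 1..num_classes-1 so every flipped label differs from the original.
    offsets = rng.integers(1, num_classes, size=n_poison)
    poisoned[idx] = (poisoned[idx] + offsets) % num_classes
    return poisoned

rng = np.random.default_rng(0)
clean = rng.integers(0, 10, size=1000)        # e.g. labels for a 10-class dataset
for rate in (0.1, 0.3, 0.5):                  # increasing attack severity
    poisoned = flip_labels(clean, num_classes=10, rate=rate, rng=rng)
    print(rate, (poisoned != clean).mean())   # fraction of labels actually flipped
```

Training a model (e.g., an SNN classifier) on the poisoned labels at each severity level and measuring the drop in test accuracy is one common way to produce the kind of severity-versus-performance curves the abstract describes.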