Adversarial Attacks on Spiking Convolutional Networks for Event-based Vision

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 · ICLR 2022 Submission
Keywords: spiking neural networks, neuromorphic engineering, adversarial attacks, dynamic vision sensors
Abstract: Event-based sensing using dynamic vision sensors is gaining traction in low-power vision applications. Spiking neural networks work well with the sparse nature of event-based data and suit deployment on low-power neuromorphic hardware. Because the field is still nascent, the sensitivity of spiking neural networks to potentially malicious adversarial attacks has received very little attention so far. In this work, we show how white-box adversarial attack algorithms can be adapted to the discrete and sparse nature of event-based visual data, and to the continuous-time setting of spiking neural networks. We test our methods on the N-MNIST and IBM Gestures neuromorphic vision datasets and show that adversarial perturbations achieve a high success rate while injecting only a relatively small number of appropriately placed events. We also verify, for the first time, the effectiveness of these perturbations directly on neuromorphic hardware. Finally, we discuss the properties of the resulting perturbations and possible future directions.
One-sentence Summary: We demonstrate the effectiveness of sparse adversarial attacks and adversarial patches on Dynamic-Vision-Sensor data in spiking CNNs, and show that these attacks remain effective on neuromorphic hardware.
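As a rough illustration of the idea in the abstract (adapting gradient-based white-box attacks to binary, sparse event tensors), the sketch below shows one generic way such an attack could look in PyTorch: use the loss gradient with respect to a dense event tensor to pick the empty locations where injecting a spike most increases the loss. This is a minimal sketch under stated assumptions, not the paper's actual algorithm; the function name, the event-tensor layout (time, polarity, height, width), the event budget `k`, and a model trained with surrogate gradients (so the input gradient is defined) are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sparse_event_attack(model, events, label, k=50):
    """Hypothetical sparse event-injection attack (illustrative only).

    Injects up to k events at the empty locations whose loss gradient
    is largest, keeping the perturbation sparse and binary-valued.
    Assumes `model(events)` returns class logits and that gradients
    flow through the spiking layers (e.g., via surrogate gradients).
    """
    events = events.clone().float().requires_grad_(True)
    logits = model(events)
    loss = F.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))
    loss.backward()

    # Only locations without an existing event are injection candidates,
    # so the attack adds events rather than rescaling them.
    candidate_grad = events.grad.masked_fill(events.detach() > 0, float("-inf"))

    # Pick the k candidate locations where adding a spike most increases the loss.
    flat_idx = torch.topk(candidate_grad.flatten(), k).indices
    adv = events.detach().clone()
    adv.view(-1)[flat_idx] = 1.0
    return adv
```

A patch-style variant, also discussed in the paper, would instead restrict the perturbed locations to a fixed spatial region rather than selecting them globally by gradient magnitude.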
Supplementary Material: zip