A High Performance and Low Latency Deep Spiking Neural Networks Conversion Framework

16 May 2022 (modified: 05 May 2023) · NeurIPS 2022 Submitted
Keywords: Spiking Neural Network, ANN-SNN Conversion, Object Detection, Object Recognition, Event Camera, ImageNet, MS-COCO
Abstract: Spiking Neural Networks (SNNs) promise energy efficiency and can approach Artificial Neural Network (ANN) performance through conversion. However, a converted SNN relies on large timesteps to compensate for conversion errors, which compromises its efficiency in practice. In this paper, we propose a novel framework to convert an ANN to its SNN counterpart losslessly with minimal timesteps. By studying the errors introduced by the whole conversion process, we reveal an overlooked inference error in addition to the coding error that occurs during conversion. Inspired by quantization-aware training, a QReLU activation is introduced during training to theoretically eliminate the coding error. Furthermore, a buffered non-leaky integrate-and-fire neuron that uses the same basic operations as conventional neurons is designed to reduce the inference error. Experiments on classification and detection tasks show that our proposed method attains ANN-level performance using only $16$ timesteps. To the best of our knowledge, this is the first time that converted SNNs with low latency have demonstrated high performance on nontrivial vision tasks. Source code will be released later.
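The abstract's core idea can be illustrated with a small sketch. The exact definitions of QReLU and the buffered neuron are not given here, so the code below follows the common quantization-aware formulation used in ANN-SNN conversion (clip-and-quantize activation, integrate-and-fire with soft reset); the function names, the threshold `theta`, and the level count `T` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def qrelu(x, theta=1.0, T=16):
    """Hypothetical QReLU sketch: clip to [0, theta] and quantize to T levels,
    so the ANN activation matches the T discrete firing rates an SNN can emit."""
    return theta / T * np.clip(np.floor(x * T / theta + 0.5), 0, T)

def if_neuron(inputs, theta=1.0):
    """Non-leaky integrate-and-fire neuron with soft reset (reset by
    subtraction). Returns the binary spike train over the given timesteps."""
    v = 0.0
    spikes = []
    for i in inputs:
        v += i                        # integrate input current (no leak)
        s = 1.0 if v >= theta else 0.0
        v -= s * theta                # subtract threshold on spike
        spikes.append(s)
    return spikes

# For a constant input, the firing rate recovers the quantized activation:
rate = sum(if_neuron([0.5] * 16)) / 16  # 0.5, matching qrelu(0.5)
```

With the activation quantized to the same `T` levels the spike train can express, the coding error between the ANN and the converted SNN vanishes for rate-coded constant inputs, which is the intuition behind the quantization-aware training step.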
TL;DR: We propose a high-performance, low-latency SNN conversion framework that addresses an overlooked conversion error.
Supplementary Material: pdf
