Abstract: Spiking Neural Networks (SNNs), as brain-inspired and energy-efficient networks, currently face the pivotal challenge of finding a suitable and efficient learning framework. The predominant training methodologies, namely Spatial-Temporal Back-propagation (STBP) and ANN-SNN Conversion, suffer from substantial training overhead or pronounced inference latency, respectively, which impedes the scaling of SNNs to larger networks and their deployment in intricate application domains. In this work, we propose a novel parallel conversion learning framework, which establishes a mathematical mapping between each time-step of the parallel spiking neurons and the cumulative spike firing rate. We theoretically validate the lossless and sorting properties of the conversion process and derive the optimal shifting distance for each step. Furthermore, by integrating this framework with a distribution-aware error calibration technique, we achieve efficient conversion for more general activation functions and in training-free settings. Extensive experiments confirm the significant performance advantages of our method across various conversion cases under ultra-low time latency. To the best of our knowledge, this is the first work that jointly utilizes parallel spiking calculation and ANN-SNN Conversion, providing a highly promising approach for supervised SNN training. Code is available at https://github.com/hzc1208/Parallel_Conversion.
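To make the core mapping concrete, the sketch below illustrates one way such a parallel conversion can work: a clipped ANN activation is quantized into a cumulative spike count over T steps and then unrolled into T binary spikes in parallel, earliest steps first (a sorted output). This is a minimal sketch under our own assumptions, namely a clipped-ReLU source activation with threshold `lam` and a per-step shift of 0.5; the function name `parallel_spike_map`, the shift value, and the tensor layout are hypothetical illustrations, not the paper's actual implementation.

```python
import torch

def parallel_spike_map(a, T, lam, shift=0.5):
    """Map ANN activations to T parallel binary spikes whose cumulative
    firing rate reproduces the clipped, quantized activation.

    a:     ANN pre-activation tensor (any shape)
    T:     number of simulation time-steps
    lam:   clipping threshold (maximum firing level); assumed known
    shift: per-step shifting distance; 0.5 (round-to-nearest) is an
           illustrative choice, not the paper's derived optimum
    """
    # Normalize the clipped activation to a firing rate in [0, 1].
    rate = torch.clamp(a, 0.0, lam) / lam
    # Quantize to an integer cumulative spike count in [0, T].
    count = torch.floor(rate * T + shift).clamp(0, T)
    # Unroll the count into T binary spikes in parallel, earliest-first,
    # so that step t fires iff t <= count (the sorting property).
    steps = torch.arange(1, T + 1, device=a.device).view(T, *([1] * a.dim()))
    spikes = (steps <= count.unsqueeze(0)).float()  # shape: (T, *a.shape)
    return spikes

# Usage: the mean firing rate over T steps recovers the clipped activation.
a = torch.randn(4, 8)
spikes = parallel_spike_map(a, T=8, lam=1.0)
print(torch.allclose(spikes.mean(0), torch.clamp(a, 0, 1), atol=1.0 / 16))
```

Because all T steps are produced by a single tensor comparison rather than a sequential membrane-potential loop, this formulation is parallel over time, which is the property the framework exploits.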
Lay Summary: This work tackles a key challenge in a type of energy-efficient, brain-inspired artificial neural network called the Spiking Neural Network (SNN). SNNs are promising for future AI due to their low power usage, but they are hard to train efficiently: current methods either incur too much computational overhead during training or are slow during actual inference. To solve this, we create a new conversion method that connects how SNNs behave over time with how traditional artificial neural networks work. Our approach delivers faster and more accurate results, even with limited computing resources. To the best of our knowledge, this is the first work that successfully combines fast parallel computation with efficient training techniques, paving the way for more practical and powerful SNN-based AI systems.
Link To Code: https://github.com/hzc1208/Parallel_Conversion
Primary Area: Applications->Neuroscience, Cognitive Science
Keywords: ANN-SNN Conversion, Parallel Spiking Calculation, Distribution-Aware Error Calibration
Submission Number: 537