Keywords: Spiking Neural Network, Deep Network, Binary output, Brain-Like
Abstract: Spiking neural networks (SNNs), which produce binary outputs from spiking neurons, are a natural fit for simulating the spiking behavior of biological neurons. SNNs have attracted growing interest for their high biological plausibility and efficient inference on neuromorphic chips. However, training SNNs with more than 50 layers remains challenging due to the vanishing gradient problem caused by the spiking neuron layers, which prevents SNNs from going deeper to obtain higher performance. In this paper, we first investigate variants of spiking residual blocks and find that deep SNNs with binary outputs cannot be constructed by simply replacing the activation function in the existing ANN residual structure with a spiking neuron layer. We thus propose a logic spiking network (LSN) to facilitate deep SNN training. Our LSN contains two functionally distinct branches, a structure inspired by excitatory and inhibitory pathways, followed by a logical operation that yields binary spike outputs. We demonstrate both theoretically and experimentally that LSN can implement identity mapping and overcome the vanishing gradient problem. Our LSN can be extended to more than 100 layers with binary outputs and performs favorably against the existing spiking ResNet and its variants. Our proposed LSN achieves 94.68% accuracy on CIFAR10, 71.86% accuracy on ImageNet, and 75.1% accuracy on CIFAR10-DVS.
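To make the two-branch idea concrete, the sketch below is a minimal illustration, not the authors' exact formulation: it assumes PyTorch, IF-style neurons with a rectangular surrogate gradient, and an element-wise logical OR as the combining operation; the branch layout, neuron model, threshold, and OR choice are all assumptions not specified by the abstract.

```python
# Hypothetical sketch of a two-branch spiking block whose binary outputs are
# merged by a logical operation. Assumptions (not from the abstract): PyTorch,
# IF neurons with a rectangular surrogate gradient, element-wise OR as the merge.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient for backprop."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()          # binary spike when potential crosses threshold 1.0

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        surrogate = (v - 1.0).abs() < 0.5  # pass gradient only near the threshold
        return grad_out * surrogate.float()


class TwoBranchSpikingBlock(nn.Module):
    """Two functionally distinct conv branches (excitatory / inhibitory in spirit)
    whose binary spikes are combined by a logical OR, keeping the output binary."""

    def __init__(self, channels):
        super().__init__()
        self.excitatory = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.inhibitory = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        s_exc = SurrogateSpike.apply(self.excitatory(x))
        s_inh = SurrogateSpike.apply(self.inhibitory(x))
        # Logical OR for a, b in {0, 1}: a + b - a*b, so the result stays in {0, 1}.
        return s_exc + s_inh - s_exc * s_inh


if __name__ == "__main__":
    block = TwoBranchSpikingBlock(16)
    out = block(torch.rand(2, 16, 8, 8))
    print(out.unique())  # values stay in {0, 1}
```

Under these assumptions, the OR-style merge preserves binary outputs while giving each branch its own path for gradients, which is one plausible way a block of this kind could support identity mapping in deep stacks.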
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip