TL;DR: We reverse the bit roles of weights and activations in SNNs (\textbf{ReverB}), pairing real-valued spike activations with binary weights to stay multiplication-free while boosting activation information capacity.
Abstract: The Spiking Neural Network (SNN), a biologically inspired neural network architecture, has recently garnered significant attention. SNNs use binary spike activations for efficient information transmission, replacing multiplications with additions and thereby improving energy efficiency. However, binary spike activation maps often fail to capture sufficient information, resulting in reduced accuracy.
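As a minimal sketch of why binary spikes remove multiplications (our own illustration, not taken from the paper), consider a single weighted sum:

```python
import numpy as np

# Toy illustration (hypothetical): with binary {0, 1} spike activations,
# a weighted sum needs no multiplications -- we simply accumulate the
# weights at the positions where a spike occurred.
rng = np.random.default_rng(0)
weights = rng.normal(size=8).astype(np.float32)   # real-valued weights
spikes = rng.integers(0, 2, size=8)               # binary spike activations

mac_result = float(np.dot(weights, spikes))       # multiply-accumulate view
add_result = float(weights[spikes == 1].sum())    # addition-only view

assert np.isclose(mac_result, add_result)
```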
To address this challenge, we advocate reversing the bit allocation of weights and activations, a method we call \textbf{ReverB}, inspired by recent findings that quantizing activations degrades accuracy more than quantizing weights. Specifically, our method pairs real-valued spike activations with binary weights in SNNs. This preserves the event-driven and multiplication-free advantages of standard SNNs while enhancing the information capacity of the activations.
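A minimal sketch of the reversed arrangement as we read it from the abstract (hypothetical code, not the authors' implementation): with \{-1, +1\} binary weights and real-valued activations, the dot product reduces to signed additions of activations, so the network remains multiplication-free:

```python
import numpy as np

# Hypothetical ReverB-style layer arithmetic: weights binarized to
# {-1, +1}, activations left real-valued. The dot product is then just
# signed addition of activations -- still multiplication-free.
rng = np.random.default_rng(1)
w_real = rng.normal(size=8).astype(np.float32)
w_bin = np.where(w_real >= 0, 1.0, -1.0)                  # binary weights
acts = rng.uniform(0.0, 1.0, size=8).astype(np.float32)   # real-valued activations

dot = float(np.dot(w_bin, acts))                          # multiply-accumulate view
add_only = float(acts[w_bin > 0].sum() - acts[w_bin < 0].sum())  # signed additions

assert np.isclose(dot, add_only)
```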
Additionally, we introduce a trainable factor into the binary weights to adaptively learn suitable weight amplitudes during training, thereby increasing network capacity. To keep inference as efficient as the vanilla \textbf{ReverB}, our trainable binary-weight SNNs are converted back to the standard form via a re-parameterization technique at inference time.
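A sketch of how the trainable amplitude and the inference-time re-parameterization might look. The per-layer scalar alpha, the straight-through estimator for the sign function, and folding alpha into the neuron's firing threshold are all our assumptions; the paper may differ in these details:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableBinaryLinear(nn.Module):
    """Sketch of a binary-weight layer with a trainable amplitude alpha.
    The effective weight during training is alpha * sign(w); at inference,
    alpha is folded into the neuron's firing threshold so the stored
    weights return to pure {-1, +1}."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.alpha = nn.Parameter(torch.ones(1))  # trainable weight amplitude

    def forward(self, x):
        sign_w = torch.where(self.weight >= 0,
                             torch.ones_like(self.weight),
                             -torch.ones_like(self.weight))
        # Straight-through estimator: forward uses sign(w); gradients flow
        # to the latent real-valued weight as if binarization were identity.
        w_bin = self.weight + (sign_w - self.weight).detach()
        return F.linear(x, self.alpha * w_bin)

    @torch.no_grad()
    def reparameterize(self, threshold):
        """Fold alpha out (assuming alpha > 0): alpha * (W_bin @ x) >= threshold
        is equivalent to W_bin @ x >= threshold / alpha, so the stored weights
        become pure {-1, +1}."""
        new_threshold = threshold / self.alpha.item()
        self.weight.copy_(torch.where(self.weight >= 0,
                                      torch.ones_like(self.weight),
                                      -torch.ones_like(self.weight)))
        self.alpha.fill_(1.0)
        return new_threshold
```

After re-parameterization, the linear map again involves only additions and subtractions of real-valued activations, matching the vanilla \textbf{ReverB} inference cost.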
Extensive experiments across various network architectures and both static and dynamic datasets demonstrate that our approach consistently outperforms state-of-the-art methods.
Lay Summary: This study introduces \textbf{ReverB-SNN}, a novel approach that enhances SNNs by pairing real-valued spike activations with binary weights. Our method addresses the reduced accuracy of standard SNNs, whose binary spike activation maps capture only limited information. By reversing the bit roles of weights and activations, we preserve the energy-efficient, multiplication-free character of traditional SNNs while significantly boosting the information capacity of the activations.
Moreover, a trainable factor within the binary weights enables adaptive learning of weight amplitudes during training, further enhancing network capacity. To keep inference as efficient as in standard SNNs, we propose a re-parameterization technique that converts the trainable binary-weight SNNs back to the standard form at inference time.
Extensive experimental validation across diverse network architectures and datasets, encompassing both static and dynamic scenarios, consistently demonstrates the superiority of our approach over existing state-of-the-art methods.
Primary Area: Applications->Neuroscience, Cognitive Science
Keywords: spiking neural network
Submission Number: 2830