LASER: A High-Fidelity Spike Representation SNN Framework With Surrogate-Free Training

15 Sept 2025 (modified: 12 Feb 2026) · ICLR 2026 Conference Desk Rejected Submission · CC BY 4.0
Keywords: Spiking Neural Networks, Surrogate-free training, Bit Spike Encoding, Adaptive Spiking Neural Codec, Straight-Through Estimator, Structural approximation, High-fidelity spike representation, Energy efficiency, Large-scale language models, Neuromorphic hardware
TL;DR: LASER confines SNN conversion error to a single controllable nonlinearity, keeping the system interpretable, tunable, and deployable. Using BSE, ASNC, and a lightweight STE, it reaches near-ANN LLM accuracy at the 70B scale.
Abstract: Spiking Neural Networks (SNNs), as the third generation of neural networks inspired by biological neural systems, demonstrate great potential for energy-efficient computing due to their inherent event-driven sparsity. However, a long-standing core challenge of SNNs lies in the intrinsic error introduced when approximating continuous values with discrete spikes, which makes it difficult to match the accuracy of Artificial Neural Networks (ANNs). To address this issue, we propose a high-fidelity spike representation SNN framework with surrogate-free training, called LASER. Specifically, LASER introduces a precise bi-directional mapping scheme between discrete and continuous values for linear computation. In addition, we design a piecewise-approximate spiking representation for nonlinear functions, enabling high-fidelity forward propagation. Building on this, we propose an STE-based backpropagation strategy to ensure functional consistency with ANNs and to achieve stable training for SNNs. Experiments validate the overall framework, showing that, unlike prior methods where errors diffuse across the network, LASER confines a slight error (0.6%) solely to the non-linear module. LASER achieves over 50% lower perplexity compared to the state-of-the-art SNN framework, with almost lossless accuracy.
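The abstract mentions an STE-based (straight-through estimator) backpropagation strategy. As background only, here is a minimal, self-contained sketch of the generic STE idea: quantize in the forward pass, but treat the quantizer as the identity in the backward pass. The function names, the 4-level grid, and the toy loss are illustrative assumptions, not LASER's actual BSE/ASNC codec.

```python
# Illustrative STE sketch (generic technique, not LASER's specific method).
# Forward: map a continuous value in [0, 1] onto a discrete spike-level grid.
# Backward: pass the gradient through unchanged, as if the quantizer were identity.

def ste_forward(x, levels=4):
    """Quantize x in [0, 1] to one of `levels` evenly spaced discrete values."""
    return round(x * (levels - 1)) / (levels - 1)

def ste_backward(grad_out):
    """Straight-through: the quantizer's gradient is treated as 1."""
    return grad_out

# Toy loop: fit a scalar w so that the quantized output matches a target,
# minimizing (q(w) - target)^2 with gradients flowing through the STE.
w, target, lr = 0.2, 1.0, 0.5
for _ in range(20):
    y = ste_forward(w)             # discrete forward pass
    grad_y = 2 * (y - target)      # gradient of the squared-error loss w.r.t. y
    w -= lr * ste_backward(grad_y) # identity backward through the quantizer

print(ste_forward(w))  # → 1.0 (quantized weight reaches the target level)
```

The key point, which the abstract's "surrogate-free" claim contrasts with, is that conventional SNN training replaces the spike nonlinearity's zero-almost-everywhere derivative with a smooth surrogate; an STE instead uses the identity as the backward map, keeping the forward pass exactly discrete.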
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 5436