Keywords: parallel training, spiking neural networks
Abstract: Spiking neurons mimic the spatiotemporal dynamics of biological neurons and their spike-based communication, endowing Spiking Neural Networks (SNNs) with biological plausibility and low-power operation. Yet these dynamics impose strict temporal dependencies on neuronal states, preventing parallel training and creating a fundamental bottleneck to efficient, scalable optimization. This work introduces a novel functional perspective to address this challenge. Specifically, we argue that the reset mechanism, which induces state dependencies, should be removed. However, any modification must satisfy two principles: i) preserving, and even enhancing, the functions of reset as a core biological mechanism; and ii) enabling parallel training without sacrificing SNNs’ inherently serial inference, which underpins their energy efficiency. To this end, we identify the functions of the reset mechanism and analyze how to reconcile parallel training with serial inference, upon which we propose a dynamic decay spiking neuron that combines a causal convolution structure with an optimized spike firing pattern. We demonstrate the efficiency and effectiveness of our approach across diverse network architectures and task benchmarks, including image classification, neuromorphic event processing, time-series forecasting, and language modeling.
Primary Area: applications to neuroscience & cognitive science
Submission Number: 12827
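To illustrate why removing the reset reconciles parallel training with serial inference, the sketch below shows a reset-free leaky neuron with a per-step (dynamic) decay: without reset, the membrane potential is a causal convolution of the input, so the whole time axis can be computed at once during training, while inference can still run the usual step-by-step recurrence. This is a minimal sketch under assumed shapes, decay values, and threshold, not the paper's implementation; all parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 16                                # number of time steps (assumed)
x = rng.normal(size=T)                # input current per step (assumed)
lam = rng.uniform(0.5, 0.95, size=T)  # per-step (dynamic) decay, assumed learnable
theta = 0.5                           # firing threshold (assumed)

# Serial inference: standard leaky recurrence with the reset removed,
# so the state no longer depends on past spikes.
u_serial = np.zeros(T)
u = 0.0
for t in range(T):
    u = lam[t] * u + x[t]
    u_serial[t] = u
spikes_serial = (u_serial >= theta).astype(float)

# Parallel training view: the same potential written as a causal convolution,
#   u[t] = sum_{k <= t} (prod_{j=k+1..t} lam[j]) * x[k]
P = np.cumprod(lam)                                  # P[t] = lam[0]*...*lam[t]
K = np.tril(np.outer(P, 1.0 / P), k=-1) + np.eye(T)  # K[t, k] = P[t]/P[k] for k < t
u_parallel = K @ x                                   # all time steps at once
spikes_parallel = (u_parallel >= theta).astype(float)

assert np.allclose(u_serial, u_parallel)
assert np.array_equal(spikes_serial, spikes_parallel)
print("max |serial - parallel| =", np.abs(u_serial - u_parallel).max())
```

Both paths produce identical potentials and spike trains, so the parallel form can be used for training while deployment keeps the energy-efficient serial recurrence.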