STSF: Spiking Time Sparse Feedback Learning for Spiking Neural Networks

Published: 01 Jan 2025 · Last Modified: 22 Jul 2025 · IEEE Trans. Neural Networks Learn. Syst. 2025 · License: CC BY-SA 4.0
Abstract: Spiking neural networks (SNNs) are biologically plausible models known for their computational efficiency. A significant advantage of SNNs lies in the binary information transmission through spike trains, which eliminates the need for multiplication operations. However, due to the spatio-temporal nature of SNNs, directly applying traditional backpropagation (BP) training still incurs significant computational costs. Meanwhile, learning methods based on unsupervised synaptic plasticity provide an alternative for training SNNs but often yield suboptimal results. Efficiently training high-accuracy SNNs therefore remains a challenge. In this article, we propose a highly efficient and biologically plausible spiking time sparse feedback (STSF) learning method. This algorithm modifies synaptic weights by combining a neuromodulator-driven global supervised signal, delivered via sparse direct feedback alignment (DFA), with local homeostatic learning based on vanilla spike-timing-dependent plasticity (STDP). Such neuromorphic global-local learning operates on instantaneous synaptic activity, enabling each network layer to be optimized independently and simultaneously, thereby improving biological plausibility, enhancing parallelism, and reducing storage overhead. Using sparse fixed random feedback connections for global error modulation, which replace multiplication operations with selection operations, further improves computational efficiency. Experimental results demonstrate that the proposed algorithm markedly reduces computational cost while achieving accuracy comparable to current state-of-the-art algorithms across a wide range of classification tasks. Our implementation codes are available at https://github.com/hppeace/STSF.
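To make the abstract's two ingredients concrete, the sketch below illustrates (a) sparse fixed random feedback in which each hidden neuron reads a single randomly selected output-error component, so the error "projection" is an index selection rather than a dense matrix multiplication, and (b) a vanilla pairwise STDP update with exponential traces. This is a minimal illustration assembled from the abstract's description only; all names, sizes, and constants (`fb_index`, `tau`, `a_plus`, `a_minus`, etc.) are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes (not from the paper): 100 hidden neurons, 10 classes.
n_hidden, n_out = 100, 10

# Sparse fixed random feedback: each hidden neuron is wired to ONE randomly
# chosen output-error component with a fixed random sign. Because the feedback
# is one-hot per neuron, routing the global error is a selection (fancy
# indexing), not a matrix multiplication.
fb_index = rng.integers(0, n_out, size=n_hidden)   # which error component
fb_sign = rng.choice([-1.0, 1.0], size=n_hidden)   # fixed random sign

def sparse_dfa_feedback(error):
    """Route the global output error to hidden neurons by selection."""
    return fb_sign * error[fb_index]

def stdp_update(pre_spikes, post_spikes, tau=20.0, a_plus=0.01, a_minus=0.012):
    """Vanilla pairwise STDP over binary spike trains of shape (T, n).

    Pre-before-post pairings potentiate, post-before-pre pairings depress,
    weighted by exponentially decaying spike traces.
    """
    T, n_pre = pre_spikes.shape
    n_post = post_spikes.shape[1]
    pre_trace = np.zeros(n_pre)
    post_trace = np.zeros(n_post)
    dw = np.zeros((n_pre, n_post))
    decay = np.exp(-1.0 / tau)
    for t in range(T):
        pre_trace = pre_trace * decay + pre_spikes[t]
        post_trace = post_trace * decay + post_spikes[t]
        dw += a_plus * np.outer(pre_trace, post_spikes[t])    # potentiation
        dw -= a_minus * np.outer(pre_spikes[t], post_trace)   # depression
    return dw

# Per-hidden-neuron modulatory error signal, obtained without any matmul.
error = rng.standard_normal(n_out)
delta = sparse_dfa_feedback(error)
```

In a full training loop, `delta` would act as the neuromodulatory factor gating the local STDP weight change, so each layer can be updated independently from instantaneous activity without backpropagating through time.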