Reasonable Gradients for Online Training Algorithms in Spiking Neural Networks

Published: 01 Jan 2024 · Last Modified: 15 May 2025 · ECAI 2024 · CC BY-SA 4.0
Abstract: Spiking neural networks (SNNs) can emulate the sparse, spatio-temporal dynamics observed in biological neurons, making them promising candidates for energy-efficient artificial general intelligence. While backpropagation through time (BPTT) trains SNNs with reliable precision, it suffers from high computation and storage complexity and does not conform to the instantaneous learning mechanism of the brain. In contrast, online training algorithms are biologically interpretable, offer low latency and memory efficiency, and are well suited to on-chip learning applications. However, recent research lacks a rigorous understanding of online gradients, which leads to notable limitations. To address this issue, we analyze in depth the deviation that weight updates induce in the chain-rule derivation and identify two pivotal factors affecting the accuracy of online gradients: completeness and timeliness. Building on these findings, we propose spatio-temporal online learning (STOL), which substantially improves the accuracy of online gradients while demonstrating superior computation and memory efficiency. Experiments on the CIFAR-10, CIFAR-100, ImageNet, CIFAR10-DVS, and DVS128-Gesture datasets show that our method achieves state-of-the-art performance on most of these tasks and markedly outperforms existing online training algorithms.
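To make the BPTT-versus-online contrast in the abstract concrete, the following is a minimal, illustrative sketch (not the paper's STOL algorithm) of online training for a single leaky integrate-and-fire layer. It tracks an eligibility trace forward in time, in the spirit of OTTT/e-prop-style rules, so only O(1) state per synapse is kept instead of the full T-step history that BPTT would store. All sizes, constants, and the rectangular surrogate are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_out = 20, 8, 4       # timesteps, input/output sizes (assumed)
tau, v_th, lr = 2.0, 1.0, 1e-2  # membrane decay constant, threshold, learning rate

W = rng.normal(0.0, 0.5, (n_out, n_in))
x = (rng.random((T, n_in)) < 0.3).astype(float)        # random input spike trains
y_target = (rng.random((T, n_out)) < 0.2).astype(float)  # toy target spikes

def surrogate(v):
    # Rectangular surrogate derivative of the Heaviside spike function.
    return (np.abs(v - v_th) < 0.5).astype(float)

v = np.zeros(n_out)
trace = np.zeros((n_out, n_in))  # eligibility trace: dv_t/dW, maintained forward in time

for t in range(T):
    decay = 1.0 - 1.0 / tau
    v = decay * v + W @ x[t]              # membrane update
    s = (v >= v_th).astype(float)         # spike emission
    # Forward-mode accumulation of dv_t/dW. Note this ignores the gradient
    # path through the reset, a common approximation in online training;
    # the paper's "completeness" analysis concerns exactly such omissions.
    trace = decay * trace + x[t][None, :]
    # Instantaneous (per-timestep) gradient of a squared spike-error loss,
    # applied immediately rather than after unrolling all T steps as in BPTT;
    # this immediacy is what the paper's "timeliness" factor refers to.
    err = (s - y_target[t]) * surrogate(v)
    W -= lr * err[:, None] * trace
    v = v * (1.0 - s)                     # hard reset after spiking
```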