Forward Gradient Training of Spiking Neural Networks

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: spiking neural networks, neuromorphic computing, forward gradient, momentum feedback connections, non-backpropagation training
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Neuromorphic computing with spiking neural networks (SNNs) is promising for energy-efficient applications. However, supervised learning of SNNs is challenging with respect to both biological plausibility and neuromorphic hardware compatibility. Most existing successful methods rely on backpropagation (BP) through time and across layers for temporal and spatial credit assignment, which is hard to realize on such hardware. While some online training methods tackle temporal credit assignment with eligibility traces, propagating error signals for proper spatial credit assignment remains an open problem. In this work, we propose a new method, forward gradient training (FGT), for spiking neural networks. FGT leverages only unidirectional forward propagation across layers and direct feedback signals from the top layer to calculate gradients for spatial credit assignment, and we reduce the large variance of vanilla forward gradients with momentum feedback connections. FGT avoids BP's layer-by-layer forward-backward computation with symmetric weights and separate phases, and offers stronger theoretical guarantees and better performance than random feedback methods. When combined with online training methods, FGT enables fully forward, online training, paving a solid path toward on-chip SNN training. Extensive experiments demonstrate the effectiveness and robustness of FGT, which achieves performance similar to BP with both fully connected and convolutional networks on static and neuromorphic datasets.
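The paper's exact FGT update is not reproduced on this page; as background for the abstract, below is a minimal JAX sketch of the vanilla forward-gradient estimator it builds on (a directional derivative computed by forward-mode AD, as in Baydin et al., "Gradients without Backpropagation"), followed by an illustrative momentum smoothing of successive estimates. The toy linear model, shapes, learning rate, and momentum coefficient are all hypothetical, and the momentum loop is only an analogy to variance reduction, not the paper's momentum feedback connections.

```python
import jax
import jax.numpy as jnp

# Hypothetical toy model: one linear layer with squared-error loss.
# This stands in for a network; it is not the paper's SNN architecture.
def loss_fn(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

def forward_gradient(w, x, y, key):
    # Sample a random tangent direction v ~ N(0, I).
    v = jax.random.normal(key, w.shape)
    # Forward-mode AD gives the directional derivative g = <grad L, v>
    # in a single forward pass, with no backpropagation across layers.
    _, g = jax.jvp(lambda w_: loss_fn(w_, x, y), (w,), (v,))
    # g * v is an unbiased but high-variance estimator of grad L;
    # this variance is what the abstract's momentum mechanism targets.
    return g * v

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
x = jax.random.normal(k1, (32, 8))
y = jax.random.normal(k2, (32, 1))
w = jax.random.normal(k3, (8, 1))

# Illustrative variance reduction: exponentially average successive
# estimates before updating. This mimics the spirit of smoothing noisy
# forward gradients; FGT's momentum feedback connections are a distinct
# mechanism described in the paper.
m = jnp.zeros_like(w)
for step in range(100):
    g_est = forward_gradient(w, x, y, jax.random.PRNGKey(step))
    m = 0.9 * m + 0.1 * g_est
    w = w - 0.1 * m

print(float(loss_fn(w, x, y)))  # loss should decrease from its initial value
```

Because the variance of the g * v estimator grows with parameter dimension, raw forward gradients scale poorly, which is why the abstract emphasizes reducing the variance of vanilla forward gradients.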
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4363