Efficient Learning in Neural Networks without Gradient Backpropagation

ICLR 2025 Conference Submission 316 Authors

13 Sept 2024 (modified: 25 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Brain-inspired, Learning algorithm, Learning efficiency
TL;DR: An efficient non-BP algorithm with mathematical convergence guarantees, achieving state-of-the-art performance in deep spiking neural networks while overcoming catastrophic forgetting.
Abstract: The brain possesses highly efficient learning algorithms that are not yet fully understood. Gradient backpropagation (BP) is a powerful tool for training artificial neural networks, but it diverges from the known anatomical and physiological constraints of the brain. Conversely, biologically plausible learning algorithms suffer from efficiency limitations when training deep neural networks. To bridge this gap, we introduce a perturbation-based approach called low-rank cluster orthogonal (LOCO) weight modification. Theoretical analysis shows that LOCO provides an unbiased estimate of the BP gradient and achieves low variance in gradient estimation. Compared with existing brain-inspired algorithms, LOCO retains mathematical convergence guarantees while improving efficiency. It can train the deepest spiking neural networks to date without gradient backpropagation, achieving state-of-the-art performance on several benchmark datasets and exhibiting the ability to overcome catastrophic forgetting. These findings suggest that biologically feasible learning methods can be substantially more efficient than previously believed. Furthermore, avoiding gradient backpropagation allows LOCO to achieve O(1) time complexity for weight updates. This opens a promising avenue for developing distributed computing systems that are more efficient than BP-based counterparts.
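For intuition, the core idea of perturbation-based gradient estimation described in the abstract can be sketched without the paper's specifics. The snippet below is a minimal, hypothetical NumPy illustration of a two-point weight-perturbation estimator with a low-rank random direction; the function names (`loss`, `perturbation_step`), the toy linear model, and the plain Gaussian low-rank construction are assumptions for exposition only and do not reproduce LOCO's cluster-orthogonal perturbations or its convergence analysis.

```python
import numpy as np

def loss(W, x, y):
    """Toy quadratic loss for a single linear layer: L = 0.5 * ||W x - y||^2."""
    r = W @ x - y
    return 0.5 * float(r @ r)

def perturbation_step(W, x, y, lr=1e-2, eps=1e-4, rank=2, rng=None):
    """One perturbation-based weight update with no backward pass.

    Samples a random low-rank direction V = U S^T / sqrt(rank), measures the
    loss change along V with two forward passes (central difference), and
    moves W against the estimated directional derivative. Since
    E[V_ij V_kl] = delta_ik * delta_jl for Gaussian U and S, the expected
    update direction equals the true gradient, i.e. the estimate is unbiased.
    """
    rng = np.random.default_rng() if rng is None else rng
    m, n = W.shape
    U = rng.standard_normal((m, rank))
    S = rng.standard_normal((n, rank))
    V = (U @ S.T) / np.sqrt(rank)                        # low-rank perturbation
    g = (loss(W + eps * V, x, y) - loss(W - eps * V, x, y)) / (2 * eps)
    return W - lr * g * V                                # no backprop needed

# Toy usage: fit a random linear map using forward passes only.
rng = np.random.default_rng(0)
W_true = rng.standard_normal((4, 8))
W = np.zeros_like(W_true)
for _ in range(5000):
    x = rng.standard_normal(8)
    W = perturbation_step(W, x, W_true @ x, rng=rng)
x = rng.standard_normal(8)
print("loss after training:", loss(W, x, W_true @ x))
```

Plain isotropic perturbations like this are unbiased but high-variance, which is what limits naive perturbation learning in deep networks; structured (e.g., orthogonal) perturbations of the kind the abstract describes are one way to reduce that variance.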
Supplementary Material: pdf
Primary Area: learning theory
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 316