Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: spiking neural network, learning algorithm, low complexity, hardware-friendly
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Backpropagation applied solely to the last time step with surrogate strategies
Abstract: Spiking Neural Networks (SNNs) show promise as energy-efficient models inspired by the brain. However, there is a lack of efficient training methods for deep SNNs with online learning rules that mimic biological systems, particularly for deployment on neuromorphic computing substrates. In this paper, we propose Surrogate Online Learning at Once (SOLO) for SNNs, which utilizes several surrogate strategies that can be implemented in a hardware-friendly manner. By exploiting an expanded spatial gradient from only the final time step of forward propagation, SOLO achieves low computational complexity while maintaining comparable accuracy and convergence speed. Moreover, the update rule of SOLO takes the simple form of three-factor Hebbian learning, which could enable online on-chip learning. Our experiments on both static and neuromorphic datasets show that SOLO achieves performance comparable to conventional learning algorithms. Furthermore, SOLO is hardware-friendly, offering robustness against device non-idealities and sparse access during write operations to memory devices.
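The abstract's core idea, applying the gradient only at the final time step and writing the update as a three-factor (pre x post x global-error) product, can be illustrated with a toy sketch. This is a minimal NumPy illustration assuming a single-layer leaky integrate-and-fire (LIF) model with a rectangular surrogate derivative; the layer sizes, leak factor, and surrogate window are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer LIF network; shapes and hyperparameters are illustrative.
T, n_in, n_out = 8, 20, 5
V_TH, BETA = 1.0, 0.9            # firing threshold and membrane leak factor

W = 0.1 * rng.standard_normal((n_out, n_in))
x = (rng.random((T, n_in)) < 0.3).astype(float)  # random input spike trains
target = np.eye(n_out)[2]                        # one-hot target

def surrogate(v):
    # Rectangular surrogate for the non-differentiable spike function
    return (np.abs(v - V_TH) < 0.5).astype(float)

# Forward propagation over all T steps (a spike resets the membrane to zero)
v = np.zeros(n_out)
for t in range(T):
    v = BETA * v * (v < V_TH) + W @ x[t]

# Learning signal taken only at the final time step: the update is a
# three-factor product of a global error, a post-synaptic surrogate
# derivative, and the last-step pre-synaptic activity.
s = (v >= V_TH).astype(float)    # output spikes at the final step
err = s - target                 # global (third) factor
dW = np.outer(err * surrogate(v), x[-1])
W -= 0.05 * dW
```

Because the update touches only final-step quantities, no per-step eligibility traces or stored intermediate states are required, which is consistent with the abstract's claims of low complexity and sparse write access.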
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5382