Abstract: Spiking neural networks (SNNs) hold significant promise as energy-efficient alternatives to conventional artificial neural networks (ANNs). However, SNNs require computations across multiple timesteps, resulting in increased latency, higher energy consumption, and additional memory-access overhead. Techniques that reduce SNN latency to a single timestep have emerged to realize truly superior energy efficiency over ANNs; however, this latency reduction often comes at the expense of noticeable accuracy degradation. Achieving an optimal balance between accuracy and energy consumption by adjusting the number of timesteps therefore remains a significant challenge. In this paper, we introduce a new dimension to the accuracy-energy tradeoff space with a novel one-hot multi-level leaky integrate-and-fire (M-LIF) neuron model. The proposed M-LIF model represents the inputs and outputs of hidden layers as a set of one-hot binary-weighted spike lanes, finding better tradeoff points while still being able to model conventional SNNs. For image classification on static datasets, we demonstrate that M-LIF SNNs outperform iso-architecture conventional LIF SNNs in accuracy ($2$% higher than a VGG16 SNN on ImageNet) while remaining energy-efficient ($20\times$ lower energy than a VGG16 ANN on ImageNet). For dynamic vision datasets, we demonstrate that M-LIF SNNs reduce latency by $3\times$ compared to conventional LIF SNNs while limiting accuracy degradation to $<1$%.
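For context, a conventional discrete-time LIF neuron and one possible reading of the one-hot binary-weighted multi-level output can be sketched as follows. The `lif_step` dynamics (leaky integration, threshold, soft reset) are the standard textbook formulation; the `mlif_step` encoding, which fires the single lane whose power-of-two weight best matches the membrane potential, is our illustrative assumption, not necessarily the paper's exact formulation.

```python
import math

def lif_step(v, x, leak=0.9, v_th=1.0):
    """One discrete timestep of a conventional leaky integrate-and-fire neuron.
    Returns (binary spike, updated membrane potential); uses soft reset."""
    v = leak * v + x
    spike = 1.0 if v >= v_th else 0.0
    v -= spike * v_th  # soft reset: subtract threshold on firing
    return spike, v

def mlif_step(v, x, leak=0.9, v_th=1.0, n_lanes=3):
    """Hypothetical one-hot multi-level step: output lanes carry weights
    2^0 .. 2^(n_lanes-1), and at most one lane fires per timestep, so the
    emitted value is a single power of two (or 0). This reading of
    'one-hot binary-weighted spike lanes' is an assumption for illustration."""
    v = leak * v + x
    if v < v_th:
        return 0.0, v  # no lane fires this timestep
    # pick the largest lane i with v >= 2^i * v_th, capped at the top lane
    i = min(int(math.floor(math.log2(v / v_th))), n_lanes - 1)
    out = float(2 ** i)   # weight of the single firing lane
    v -= out * v_th       # soft reset by the emitted magnitude
    return out, v
```

With `n_lanes = 1` the multi-level step degenerates to the conventional binary LIF step, consistent with the abstract's claim that M-LIF can still model conventional SNNs.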
Keywords: spiking neural networks, leaky integrate-and-fire, energy-efficient, low latency
TL;DR: We propose a one-hot multi-level leaky integrate-and-fire (M-LIF) neuron to reduce the number of timesteps T during SNN inference while improving accuracy and maintaining the low-spike rates of traditional SNNs.
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8534