An Approximate Computing-Based Spiking Neural Networks Neuron Model and STDP Learning

Haihang Xia, Haotian Liu, Yuqin Zhao, John Goodenough, Charith Abhayaratne, Sichen Liu, Sizhao Li, Tiantai Deng

Published: 01 Jan 2025 · Last Modified: 28 Feb 2026 · IEEE Transactions on Circuits and Systems I: Regular Papers · CC BY-SA 4.0
Abstract: Spiking Neural Networks (SNNs) show great potential in applications such as image processing, robotics, and communications. However, the vast number of neuron-model and learning-algorithm computations in large-scale SNNs imposes significant hardware and energy overhead, with multiplication as the most costly operation. To address this challenge, this paper presents the hardware design of the Logarithmic Linear Multiply (LLMu) and the Logarithmic Linear Segmented Multiply (LLSMu). Both components are tailored to neuron models and learning algorithms, achieving high accuracy with low hardware resource utilization and energy consumption. To demonstrate the capabilities of LLMu and LLSMu, we implement two mainstream SNN neuron models, Leaky Integrate-and-Fire (LIF) and Izhikevich, as well as the Spike Timing-Dependent Plasticity (STDP) learning algorithm, and compare their performance with state-of-the-art approaches on FPGA and ASIC platforms; the scope of this work is limited to these models and algorithms. The LLMu- and LLSMu-based implementations exhibit significantly improved energy efficiency over existing methods. Specifically, in the FPGA implementation, the LLSMu-based LIF neuron model achieves a $6.75\times$ improvement, the LLSMu-based Izhikevich neuron model achieves a $2.70\times$ to $3.72\times$ improvement, and the LLMu-based STDP achieves a $21.03\times$ to $48.78\times$ improvement in energy efficiency. In the ASIC implementation, the LLSMu-based Izhikevich neuron model further improves energy efficiency by $5.58\times$ to $5.69\times$, while the LLMu-based STDP achieves $5.96\times$ and $3.69\times$ improvements compared to prior designs.
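
The abstract does not specify the internals of LLMu or LLSMu. Approximate logarithmic multipliers of this family are typically derived from Mitchell's approximation, which encodes each operand as a power of two times a linear mantissa so that multiplication reduces to leading-one detection, shifts, and additions. The sketch below is a minimal illustration of that general technique under stated assumptions; the function names, the Q0.16 mantissa width FRAC, and the LIF leak constant are all hypothetical and are not taken from the paper's design.

    FRAC = 16  # assumed Q0.16 fixed-point mantissa width

    def ilog2(x: int) -> int:
        # Position of the leading one bit, i.e. floor(log2(x)) for x >= 1.
        return x.bit_length() - 1

    def mitchell_mul(a: int, b: int) -> int:
        # Approximate a * b for unsigned integers via Mitchell's method:
        # write each operand as 2**k * (1 + m) with m in [0, 1), then add
        # the (k, m) pairs instead of multiplying.
        if a == 0 or b == 0:
            return 0
        ka, kb = ilog2(a), ilog2(b)
        ma = ((a - (1 << ka)) << FRAC) >> ka  # mantissa of a in Q0.16
        mb = ((b - (1 << kb)) << FRAC) >> kb  # mantissa of b in Q0.16
        s = ma + mb
        if s < (1 << FRAC):
            # No mantissa overflow: result ~ 2**(ka+kb) * (1 + ma + mb).
            return (((1 << FRAC) + s) << (ka + kb)) >> FRAC
        # Mantissa sum >= 1: result ~ 2**(ka+kb+1) * (ma + mb).
        return (s << (ka + kb + 1)) >> FRAC

    # Hypothetical use inside a fixed-point LIF membrane update,
    # V <- alpha * V + I, with an assumed Q0.16 leak factor alpha = 0.9.
    alpha = int(0.9 * (1 << FRAC))
    V, I = 1200, 300
    V = (mitchell_mul(alpha, V) >> FRAC) + I

Because the partial-product array is replaced by a leading-one detector, two shifters, and an adder, a multiplier of this kind trades a bounded approximation error for much lower area and switching activity, which is consistent with the kind of energy-efficiency gains the abstract reports for the LLMu- and LLSMu-based neuron and STDP implementations.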