Accelerating BPTT-Based SNN Training with Sparsity-Aware and Pipelined Architecture

Published: 2024 · Last Modified: 15 May 2025 · ISCAS 2024 · CC BY-SA 4.0
Abstract: On-chip learning for Spiking Neural Networks (SNNs) has been extensively researched to enhance adaptability and privacy protection, with Back-Propagation-Through-Time (BPTT) emerging as the top-performing training method despite its resource-intensive nature. In this paper, we propose a dedicated training processor that accelerates the BPTT algorithm for SNNs. We analyze the bottlenecks and optimization opportunities in SNN-BPTT and introduce novel techniques, such as recalculation of membrane potentials, to reduce redundant data movement. Additionally, we implement a pipelined architecture with heterogeneous computing cores to maximize hardware utilization and parallelism. Exploiting three types of sparsity in BPTT allows us to skip unnecessary computation and memory accesses, further improving performance. The proposed processor, implemented in 40 nm CMOS technology, achieves a simulated training energy efficiency of 0.86 pJ/OP.
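
The abstract does not detail the neuron model or the recalculation scheme, so the following is only a minimal software sketch of the general idea: rather than storing the full membrane-potential trace during the forward pass and reading it back during BPTT, the backward pass recomputes the trace on the fly, trading cheap arithmetic for off-chip data movement. It assumes a standard leaky integrate-and-fire (LIF) neuron with hard reset and a rectangular surrogate gradient; all names and constants are illustrative, not taken from the paper.

```python
import numpy as np

DECAY, V_TH = 0.5, 1.0  # illustrative leak factor and firing threshold


def lif_forward(inputs, record_v=False):
    """Forward pass of a LIF layer.

    inputs: (T, N) array of presynaptic currents over T timesteps.
    Returns binary spike trains, and optionally the membrane trace.
    """
    v = np.zeros(inputs.shape[1], dtype=inputs.dtype)
    spikes, v_trace = [], []
    for t in range(len(inputs)):
        v = DECAY * v + inputs[t]                  # leaky integration
        s = (v >= V_TH).astype(inputs.dtype)       # threshold -> binary spike
        v_trace.append(v.copy())
        v = v * (1.0 - s)                          # hard reset after a spike
        spikes.append(s)
    if record_v:
        return np.stack(spikes), np.stack(v_trace)
    return np.stack(spikes)


def lif_backward(inputs, grad_spikes):
    """BPTT backward pass that recomputes the membrane trace instead of
    fetching a stored copy (the recalculation idea, approximated in software)."""
    spikes, v_trace = lif_forward(inputs, record_v=True)  # recalculation step
    grad_in = np.zeros_like(inputs)
    grad_v = np.zeros(inputs.shape[1], dtype=inputs.dtype)
    for t in reversed(range(len(inputs))):
        # Rectangular surrogate gradient: dS/dV = 1 inside a window around V_TH.
        sur = (np.abs(v_trace[t] - V_TH) < 0.5).astype(inputs.dtype)
        # Credit from this step's spike plus the decayed path from t+1
        # (the reset branch is detached, a common simplification).
        grad_v = grad_spikes[t] * sur + grad_v * DECAY * (1.0 - spikes[t])
        grad_in[t] = grad_v                        # dL/d(inputs[t]) = dL/dv[t]
    return grad_in


# Usage: 8 timesteps, 4 neurons, unit upstream gradient on every spike.
x = np.random.rand(8, 4).astype(np.float32)
g = np.ones((8, 4), dtype=np.float32)
print(lif_backward(x, g).shape)  # (8, 4)
```

The recomputation mirrors gradient checkpointing: only the inputs (and cheap binary spikes) need to persist between passes, which is what reduces the redundant data movement the paper targets.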
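The abstract likewise does not enumerate the three sparsity types it exploits. Two sources that any SNN-BPTT dataflow exposes are spike sparsity (binary forward activations are mostly zero) and gradient sparsity (surrogate gradients vanish outside a window around the threshold, zeroing many backward values). The sketch below shows how a weight-gradient accumulation can skip both the all-zero rows and columns; the function name and layout are hypothetical, not the paper's design.

```python
import numpy as np


def sparse_grad_w(spikes_in, grad_v):
    """Accumulate grad_w[i, j] = sum_t spikes_in[t, i] * grad_v[t, j],
    visiting only the nonzero entries on each timestep.

    spikes_in: (T, n_in) binary presynaptic spikes (forward sparsity).
    grad_v:    (T, n_out) membrane-potential gradients (backward sparsity).
    """
    (T, n_in), n_out = spikes_in.shape, grad_v.shape[1]
    grad_w = np.zeros((n_in, n_out), dtype=grad_v.dtype)
    for t in range(T):
        fired = np.nonzero(spikes_in[t])[0]   # skip silent presynaptic neurons
        if fired.size == 0:
            continue                          # whole timestep contributes nothing
        nz = np.nonzero(grad_v[t])[0]         # skip zeroed gradient lanes
        if nz.size == 0:
            continue
        # Rank-1 update restricted to the active sub-block; since spikes are
        # binary, the contribution of each fired row is just grad_v[t, nz].
        grad_w[np.ix_(fired, nz)] += grad_v[t, nz]
    return grad_w
```

In hardware, the analogous skipping avoids both the multiply-accumulates and the weight-memory accesses for the zero operands, which is where the performance and energy savings claimed in the abstract come from.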