Abstract: Spiking neural networks (SNNs) have drawn wide attention in recent research. With brain-inspired dynamics and spike-based communication, SNNs are expected to be more energy-efficient than existing artificial neural networks (ANNs). To make better use of the temporal sparsity of spikes and the spatial sparsity of weights in SNNs, this paper presents a sparse SNN accelerator. It adopts a novel self-adaptive spike compressing and decompressing (SASCD) mechanism that adapts to varying input spike sparsity, together with on-chip compressed weight storage and processing. We implement the octa-core design on a field-programmable gate array (FPGA). The results demonstrate a peak performance of 35.84 GSOPs/s, which is equivalent to 358.4 GSOPs/s in a dense SNN accelerator at 90% weight sparsity. For a single-layer perceptron model with rate coding implemented on the hardware, SASCD reduces the time-step interval from 2.15 $\mu$s to 0.55 $\mu$s.
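The abstract does not detail how SASCD works. As a rough illustration of the general idea of sparsity-adaptive spike compression, the Python sketch below switches between a dense bitmap and an address-event (index) list depending on which representation is smaller for the observed spike activity. The function names (`encode_spikes`, `decode_spikes`) and the bit-cost model are hypothetical and not taken from the paper.

```python
from math import ceil, log2

def encode_spikes(spikes):
    """Encode a binary spike vector for one time step.

    Chooses the cheaper of two formats:
      - 'bitmap': one bit per neuron (efficient for dense activity)
      - 'aer'   : list of spiking-neuron indices (efficient for sparse activity)
    """
    n = len(spikes)
    indices = [i for i, s in enumerate(spikes) if s]
    bitmap_bits = n                              # one bit per neuron
    index_bits = max(1, ceil(log2(n)))           # bits per neuron address
    aer_bits = len(indices) * index_bits         # total cost of the index list
    if aer_bits < bitmap_bits:
        return ("aer", indices)
    return ("bitmap", list(spikes))

def decode_spikes(fmt, payload, n):
    """Recover the dense binary spike vector from either format."""
    if fmt == "bitmap":
        return list(payload)
    out = [0] * n
    for i in payload:
        out[i] = 1
    return out

# Example: a sparse vector (2 spikes out of 16) is stored as indices,
# while a dense vector falls back to the raw bitmap.
fmt, payload = encode_spikes([0, 1] + [0] * 13 + [1])
assert fmt == "aer" and payload == [1, 15]
assert decode_spikes(fmt, payload, 16)[1] == 1
```

In a hardware accelerator, such a per-time-step format decision would let the compression track the input spike sparsity automatically, which is the kind of adaptivity the abstract attributes to SASCD.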