Abstract: Specializing the training dataset to obtain narrow spiking neural networks (SNNs) was recently proposed as an efficient approach for SNN processing. This approach primarily reduces the memory overhead of SNNs, improving overall processing efficiency and lowering hardware cost. However, task specialization using narrow, independent SNNs leads to a non-negligible accuracy degradation in some applications. In addition, task specialization in SNNs incurs a substantial training burden and requires human expertise to design the specialized tasks. In this paper, we propose the use of gated and specialized layers in SNNs to reduce the memory overhead while maintaining state-of-the-art accuracy. The proposed solution reduces the per-layer width of the SNN, allows specialized units to be reused across different classes, and eliminates the training burden incurred by the previous approach. Our results show an improvement in inference processing efficiency of up to 3x on real general-purpose hardware while maintaining state-of-the-art accuracy.
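To make the idea of a gated, specialized layer concrete, the following is a minimal illustrative sketch, not the authors' implementation: a layer of leaky integrate-and-fire units is partitioned into narrow groups of specialized units, and a binary gate activates only the groups relevant to the current input, so groups can be shared (reused) across classes. All names and parameters here (e.g., GatedSpecializedLayer, n_groups, units_per_group) are hypothetical and chosen only for illustration.

```python
# Minimal sketch of a gated specialized spiking layer (illustrative only;
# names and the gating/routing scheme are assumptions, not the paper's code).
import numpy as np

class GatedSpecializedLayer:
    def __init__(self, n_in, n_groups, units_per_group,
                 v_th=1.0, decay=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.n_groups = n_groups
        self.units_per_group = units_per_group
        n_out = n_groups * units_per_group
        self.w = rng.normal(0.0, 0.1, size=(n_in, n_out))  # synaptic weights
        self.v = np.zeros(n_out)                            # membrane potentials
        self.v_th = v_th                                     # firing threshold
        self.decay = decay                                   # leak factor

    def step(self, spikes_in, gate):
        """One simulation time step.

        spikes_in: binary input spike vector, shape (n_in,)
        gate:      binary mask over groups, shape (n_groups,); in a real
                   system it would be produced by a small router, and
                   inactive groups would be skipped entirely, which is
                   where the memory/compute saving comes from.
        """
        mask = np.repeat(gate.astype(float), self.units_per_group)
        current = spikes_in @ self.w                  # input current per unit
        self.v = self.decay * self.v + current * mask # gated leaky integration
        spikes_out = (self.v >= self.v_th).astype(float) * mask
        self.v = np.where(spikes_out > 0, 0.0, self.v)  # reset fired units
        return spikes_out

# Toy usage: two classes share group 1, so specialized units are reused.
layer = GatedSpecializedLayer(n_in=16, n_groups=4, units_per_group=8)
gate_class_a = np.array([1, 1, 0, 0])   # class A uses groups 0 and 1
gate_class_b = np.array([0, 1, 1, 0])   # class B reuses group 1
spikes = (np.random.default_rng(1).random(16) > 0.5).astype(float)
out_a = layer.step(spikes, gate_class_a)
```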