Towards efficient deep spiking neural networks construction with spiking activity based pruning

16 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: spiking neural networks, pruning, network structure, power consumption
TL;DR: Towards efficient deep spiking neural networks construction with spiking activity based pruning
Abstract: Spiking neural networks (SNNs), drawing inspiration from the biological nervous system, possess the distinctive advantages of biological interpretability and energy efficiency. In recent years, deep and large-scale SNN architectures have emerged that achieve high performance on a variety of complex datasets. However, these architectures often contain a large number of redundant structural units, making it necessary to compress SNN models in order to more effectively exploit their low-power advantage. Most existing model compression techniques for SNNs rely on unstructured pruning of individual connections, which requires specific hardware support. Receptive field cells in the biological visual system inspired a crucial concept in deep learning: the convolutional kernel. We therefore propose a structured pruning approach based on the activity levels of convolutional kernels, termed the Spiking Channel Activity-based (SCA) network pruning framework. Inspired by synaptic plasticity mechanisms, our method dynamically adjusts the network's structure by pruning and regenerating convolutional kernels during training, improving the model's adaptation to the target task. While maintaining model performance, this approach refines the network architecture, ultimately reducing computational load and accelerating inference. We conducted experiments on the static datasets CIFAR10 and CIFAR100, as well as the neuromorphic dataset DVS-CIFAR10. Experimental results demonstrate that the method incurs only about 2% accuracy loss while retaining 20% of the channels. This indicates that structured dynamic sparse learning methods can better facilitate the application of deep SNNs in low-power, high-efficiency scenarios.
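For intuition only, below is a minimal, hypothetical PyTorch-style sketch of what spiking-activity-based channel pruning with regeneration could look like. The function names, tensor shapes, and ratios are illustrative assumptions and not the authors' SCA implementation.

```python
# Hypothetical sketch (not the authors' code): channel pruning driven by
# per-channel spiking activity, with periodic regeneration of pruned channels.
import torch

def channel_activity(spikes: torch.Tensor) -> torch.Tensor:
    """Mean firing rate per output channel.

    `spikes` is assumed to be a binary tensor of shape [T, B, C, H, W]
    (time steps, batch, channels, height, width).
    """
    return spikes.float().mean(dim=(0, 1, 3, 4))  # -> [C]

def update_channel_mask(activity: torch.Tensor,
                        mask: torch.Tensor,
                        prune_ratio: float = 0.8,
                        regrow_ratio: float = 0.05) -> torch.Tensor:
    """Prune the least active channels and randomly regrow a few pruned ones.

    `mask` is a {0,1} vector of length C applied to a convolution's output
    channels; zeroing an entry removes that kernel from the forward pass.
    The ratios here are illustrative, not values from the paper.
    """
    c = activity.numel()
    n_keep = max(1, int(round((1.0 - prune_ratio) * c)))
    keep_idx = torch.topk(activity, n_keep).indices   # most active channels
    new_mask = torch.zeros_like(mask)
    new_mask[keep_idx] = 1.0

    # Regeneration: re-enable a small random subset of pruned channels so the
    # structure can keep adapting during training (plasticity-inspired).
    pruned_idx = (new_mask == 0).nonzero(as_tuple=True)[0]
    n_regrow = int(round(regrow_ratio * c))
    if n_regrow > 0 and pruned_idx.numel() > 0:
        pick = pruned_idx[torch.randperm(pruned_idx.numel())[:n_regrow]]
        new_mask[pick] = 1.0
    return new_mask

# Example: 64 channels, simulated spike trains over 4 time steps.
if __name__ == "__main__":
    spikes = (torch.rand(4, 8, 64, 16, 16) < 0.1).float()
    mask = torch.ones(64)
    mask = update_channel_mask(channel_activity(spikes), mask)
    print(f"channels kept: {int(mask.sum().item())} / 64")
```

In practice such a mask would be recomputed at intervals during training and applied to the convolution's output channels, so the pruned structure can keep adapting to the task rather than being fixed after a one-shot pruning pass.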
Supplementary Material: zip
Primary Area: applications to neuroscience & cognitive science
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 528