Conversion of sparse Artificial Neural Network to sparse Spiking Neural Network can save up to 99% of energy

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Spiking neural networks, ANN-to-SNN conversion, dynamic sparse training, Cannistraci-Hebb Training, sustainable AI, energy-efficient architectures
Abstract: Artificial Neural Networks (ANNs) are becoming increasingly important but face the challenges of large scale and high energy consumption. Dynamic Sparse Training (DST) aims to reduce the memory and energy consumption of ANNs by learning sparse network topologies, which ultimately results in structural connection sparsity. Meanwhile, Spiking Neural Networks (SNNs) have attracted increasing attention due to their biological plausibility and event-driven nature, which ultimately results in temporal sparsity. To bypass the difficulty of directly training SNNs, converting pre-trained ANNs to SNNs (ANN2SNN) has become a popular approach for obtaining high-performance SNNs. Here, for the first time, we investigate the advantage of dynamically sparsely trained ANNs for conversion into sparse SNNs. By adopting Cannistraci-Hebb Training (CHT), a state-of-the-art brain-inspired DST family that resembles synaptic turnover during neuronal connectivity learning in brain circuits, we investigated the extent to which connection sparsity impacts the accuracy and theoretical energy efficiency of SNNs across different conversion approaches. The results show that sparse SNNs can achieve accuracy comparable to, or even surpassing, that of dense SNNs. Moreover, sparse SNNs can reduce theoretical energy consumption by up to 99% compared with dense SNNs. Furthermore, driven by the interest in understanding the dynamical interaction between firing rate and accuracy in SNNs, we systematically analyzed the temporal relationship between the saturation of firing rate and the saturation of accuracy. Our results reveal a significant time lag in which firing-rate saturation precedes accuracy saturation. We also demonstrate that the magnitude of this time lag differs significantly between sparse and dense networks, with the average time lag of sparse SNNs being longer than that of dense SNNs.
Together, these results demonstrate that Cannistraci-Hebb Training can be effectively integrated into ANN-to-SNN conversion pipelines to obtain SNNs with a competitive trade-off between accuracy and theoretical energy consumption.
Supplementary Material: zip
Primary Area: applications to neuroscience & cognitive science
Submission Number: 16520