Keywords: spiking neural networks
TL;DR: A sensitivity-guided structured pruning method for spiking neural networks.
Abstract: Spiking Neural Networks (SNNs) offer compelling advantages in energy efficiency and biological plausibility but face performance and deployment challenges due to redundant structural units in suboptimal architectures. Existing compression techniques predominantly rely on unstructured connection-level pruning, which often necessitates specialized hardware for efficient execution. To overcome these limitations, we propose SPTE (Sensitivity-guided Pruning by Taylor Expansion), a structured pruning framework that leverages Taylor expansion to estimate each convolutional kernel's sensitivity to the loss function during training. This enables the iterative removal of less critical components. Extensive experiments across four benchmark datasets demonstrate the effectiveness of SPTE. Remarkably, SPTE achieves 78.09\% connectivity sparsity on CIFAR10 with a +1.49\% accuracy gain, outperforming previous state-of-the-art methods in both performance and model compactness.
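To make the sensitivity criterion concrete, here is a minimal sketch of first-order Taylor-expansion importance scoring for convolutional kernels, as commonly used in structured pruning. This is an illustration under stated assumptions, not the paper's exact SPTE formulation: the function name `taylor_kernel_importance`, the per-kernel reduction, and the toy loss are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of first-order Taylor importance:
# importance(k) ≈ | sum over kernel k's weights of (dL/dw · w) |,
# i.e. the estimated change in loss if kernel k were removed.
# This is a generic criterion, not necessarily SPTE's exact score.

def taylor_kernel_importance(conv: nn.Conv2d) -> torch.Tensor:
    """Return one importance score per output kernel of `conv`.

    Assumes `loss.backward()` has already populated `conv.weight.grad`.
    """
    w, g = conv.weight, conv.weight.grad
    # Sum g·w over (in_channels, kH, kW) to get one score per kernel.
    return (g * w).sum(dim=(1, 2, 3)).abs()

# Minimal usage: score kernels of a toy conv layer after one backward pass.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(4, 3, 32, 32)
loss = conv(x).pow(2).mean()       # stand-in for the task loss
loss.backward()

scores = taylor_kernel_importance(conv)
prune_idx = scores.argsort()[:2]   # e.g. mark the 2 least sensitive kernels
print(prune_idx)
```

In an iterative scheme like the one the abstract describes, such scores would be accumulated over training steps and the lowest-scoring kernels removed in rounds.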
Primary Area: applications to neuroscience & cognitive science
Submission Number: 10351