Keywords: Neural architecture search, spiking neural network, knowledge distillation
Abstract: Bridging the performance gap between Spiking Neural Networks (SNNs) and Artificial Neural Networks (ANNs) under low timesteps remains a critical challenge in the SNN community. Recent work narrows this gap with either ANN-supervised training or automated architecture design. However, the combination of ANN-supervised training and SNN architecture search remains unexplored, leaving room for further improving SNN performance. To address this, we propose Distilling SNN Students from ANN Teachers via Spiking Neural Architecture Search (DSAS), a training-free spiking neural architecture search method that leverages pre-trained ANN teachers to discover efficient, high-performance SNNs with few timesteps. Specifically, DSAS employs an evolutionary neural architecture search guided by two novel metrics: Multi-layer Activation Similarity (MAS) and Threshold-guided Gradient Similarity (TGS). MAS aligns the feature maps of the ANN teacher and the SNN student, while TGS ensures gradient alignment during the tuning of spiking activation thresholds. Experiments demonstrate that DSAS achieves state-of-the-art accuracy with four timesteps on both convolution-based and transformer-based search spaces, effectively narrowing the performance gap between ANNs and SNNs. For example, DSAS discovers architectures that achieve 65.50% top-1 accuracy on Tiny-ImageNet and 81.97% on CIFAR-100. Code is available at: https://anonymous.4open.science/r/DSAS-5764
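To make the MAS idea concrete, below is a minimal, hypothetical sketch of a multi-layer activation-similarity score: average cosine similarity between ANN and SNN feature maps at paired layers. The paper's exact formulation is not given here; the layer pairing, the time-averaging of SNN spikes into firing rates, and the function name `multilayer_activation_similarity` are all assumptions for illustration only.

```python
# Hypothetical MAS-style metric sketch (not the authors' exact definition).
import torch
import torch.nn.functional as F


def multilayer_activation_similarity(ann_feats, snn_feats):
    """Average per-layer cosine similarity between feature maps.

    ann_feats, snn_feats: lists of [B, C, H, W] tensors, one per paired layer.
    SNN features are assumed already averaged over timesteps (firing rates).
    Returns a scalar in [-1, 1]; higher means better ANN-SNN alignment.
    """
    assert len(ann_feats) == len(snn_feats), "layer lists must be paired"
    sims = []
    for a, s in zip(ann_feats, snn_feats):
        a = a.flatten(1)  # [B, C*H*W]
        s = s.flatten(1)
        sims.append(F.cosine_similarity(a, s, dim=1).mean())
    return torch.stack(sims).mean()


# Toy usage: two paired layers, batch of 4.
ann = [torch.randn(4, 16, 8, 8), torch.randn(4, 32, 4, 4)]
snn = [torch.rand(4, 16, 8, 8), torch.rand(4, 32, 4, 4)]  # e.g., firing rates
print(multilayer_activation_similarity(ann, snn).item())
```

In a training-free evolutionary search like the one the abstract describes, such a score could rank candidate SNN architectures by teacher alignment without training any candidate to convergence.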
Primary Area: applications to neuroscience & cognitive science
Submission Number: 18211