Deep spiking neural networks based on model fusion technology for remote sensing image classification

Published: 01 Jan 2025, Last Modified: 14 May 2025 · Eng. Appl. Artif. Intell. 2025 · CC BY-SA 4.0
Abstract: The spiking neural network (SNN), a brain-inspired third-generation neural network, has attracted great research interest due to its ultra-low-power, event-driven data processing. Obtaining high-accuracy deep networks has long been a challenge in the SNN field. At present, there are two main methods for training deep SNNs: direct training with spike-based spatiotemporal backpropagation (STBP), and indirect training by converting a trained artificial neural network (ANN) into an SNN (ANN-SNN). Directly trained SNNs are usually inefficient, while ANN-SNN-based models require long inference times and also suffer performance losses. The model fusion technology (MFT) proposed in this paper combines ANN-SNN conversion with STBP, providing a new training paradigm for obtaining deep SNNs. We propose a bilateral multi-strength integrate-and-fire (BM-IF) spiking neuron for ANN-SNN conversion to simplify the conversion operation. Under the same network architecture and encoding time window, a ResNet34 trained with our algorithm achieves state-of-the-art (SOTA) accuracy on ImageNet. Combined with transfer learning, it also achieves SOTA results on three remote sensing scene classification datasets. Our results further show that the MFT-based SNN surpasses the best inference accuracy reported by ANN-SNN methods under the same network architecture and dataset, while reducing the number of inference time steps by a factor of 512 to 1250. This algorithm thus provides a low-latency, high-accuracy training scheme for the development of deep SNNs.
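To make the integrate-and-fire idea behind the abstract concrete, the sketch below simulates a simple IF-style neuron with signed ("bilateral") and integer-magnitude ("multi-strength") spikes and a soft reset, as commonly used in ANN-SNN conversion. This is an illustrative guess at the general mechanism only; the paper's actual BM-IF neuron definition, names, and parameters are not given in the abstract and the function `bm_if_like_neuron` is hypothetical.

```python
import math

def bm_if_like_neuron(inputs, threshold=1.0):
    """Hedged sketch of a bilateral multi-strength IF-style neuron.

    NOTE: an illustration of the general idea, not the paper's BM-IF
    definition. At each discrete time step the membrane potential
    integrates the input current; the neuron then emits a signed,
    integer-valued spike whose magnitude reflects how many thresholds
    the potential has crossed, and the potential is soft-reset by the
    emitted charge (residual charge is preserved).
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v += x                          # integrate input current
        s = math.trunc(v / threshold)   # signed, multi-strength spike
        v -= s * threshold              # soft reset by emitted charge
        spikes.append(s)
    return spikes

# Example: sub-threshold input emits no spike; strong positive input
# can emit a multi-strength spike; negative drive emits a negative one.
print(bm_if_like_neuron([0.5, 0.75, -1.5, 2.25]))  # → [0, 1, -1, 2]
```

A soft reset (subtracting the emitted charge rather than zeroing the potential) is the standard choice in ANN-SNN conversion because it preserves the residual membrane charge and reduces conversion error over long time windows.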