Tensor decomposition based attention module for spiking neural networks

Published: 01 Jan 2024 · Last Modified: 13 Feb 2025 · Knowl. Based Syst. 2024 · CC BY-SA 4.0
Abstract: The attention mechanism has been proven to be an effective way to improve the performance of spiking neural networks (SNNs). However, examining existing attention modules from the perspective of tensor decomposition, we find that the rank of the attention maps they generate is fixed at 1, leaving no flexibility to adjust for specific tasks. To tackle this problem, we propose an attention module, namely Projected-full Attention (PFA), in which the rank of the generated attention maps can be chosen according to the characteristics of different tasks. Additionally, the parameter count of PFA grows linearly with the data scale. PFA is composed of a Linear Projection of Spike Tensor (LPST) module and an Attention Map Composing (AMC) module. In LPST, we first compress the original spike tensor into three projected tensors, one per dimension, using learnable parameters. Then, in AMC, we exploit the inverse procedure of the tensor decomposition process to combine the three tensors into the attention map via a so-called connecting factor. To validate the effectiveness of the proposed PFA module, we integrate it into the widely used VGG and ResNet architectures for classification tasks. Our method achieves state-of-the-art performance on both static and dynamic benchmark datasets, surpassing existing SNN models with Transformer-based and CNN-based backbones. Code for PFA is available at https://github.com/RisingEntropy/PFA.
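To make the LPST/AMC pipeline described above concrete, the sketch below shows one plausible PyTorch reading of the idea: each dimension of the spike tensor is projected into a rank-R factor (LPST), and the factors are recombined, CP-decomposition style, through a learnable connecting factor into a full attention map (AMC). The class name, pooling choices, and shapes here are assumptions for illustration only and are not taken from the official repository.

```python
import torch
import torch.nn as nn


class PFASketch(nn.Module):
    """Hypothetical Projected-full Attention sketch for a spike tensor of shape (B, C, H, W)."""

    def __init__(self, channels: int, height: int, width: int, rank: int = 2):
        super().__init__()
        self.rank = rank
        # LPST: learnable projections that compress the spike tensor into one
        # factor per dimension (C, H, W), each with `rank` components.
        # Parameter count grows linearly with C + H + W.
        self.proj_c = nn.Linear(channels, channels * rank)
        self.proj_h = nn.Linear(height, height * rank)
        self.proj_w = nn.Linear(width, width * rank)
        # Connecting factor: learnable weights combining the rank-1 terms.
        self.connect = nn.Parameter(torch.ones(rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # LPST: pool over the other two dimensions, then project (assumed pooling).
        vec_c = self.proj_c(x.mean(dim=(2, 3))).view(b, self.rank, c)
        vec_h = self.proj_h(x.mean(dim=(1, 3))).view(b, self.rank, h)
        vec_w = self.proj_w(x.mean(dim=(1, 2))).view(b, self.rank, w)
        # AMC: inverse of a CP-style decomposition — a weighted sum of rank-1
        # outer products yields an attention map of rank up to `rank`.
        attn = torch.einsum('r,brc,brh,brw->bchw',
                            self.connect, vec_c, vec_h, vec_w)
        return x * torch.sigmoid(attn)


if __name__ == "__main__":
    x = torch.rand(4, 64, 32, 32)            # dummy spike activations
    out = PFASketch(64, 32, 32, rank=2)(x)
    print(out.shape)                          # torch.Size([4, 64, 32, 32])
```

Setting `rank=1` in this sketch recovers the fixed rank-1 behaviour the abstract attributes to prior attention modules; larger values give the task-dependent flexibility PFA argues for.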