TTFSFormer: A TTFS-based Lossless Conversion of Spiking Transformer

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: ANN-to-SNN conversion has emerged as a key approach to training Spiking Neural Networks (SNNs), particularly for Transformer architectures, as it maps pre-trained ANN parameters to SNN equivalents without retraining, thereby preserving ANN accuracy while eliminating training costs. Among the coding methods used in ANN-to-SNN conversion, time-to-first-spike (TTFS) coding, which allows each neuron to fire at most one spike, offers significantly lower energy consumption. However, while previous TTFS-based SNNs have achieved performance comparable to convolutional ANNs, the attention mechanism and nonlinear layers in Transformer architectures remain a challenge for existing SNNs with TTFS coding. This paper proposes a new neuron structure for TTFS coding that expands its representational range and enhances its capability to process nonlinear functions, along with detailed designs of nonlinear neurons for the different layers of the Transformer. Experimental results on different models demonstrate that the proposed method achieves high accuracy with significantly lower energy consumption. To the best of our knowledge, this is the first work to focus on converting Transformers to SNNs with TTFS coding.
Lay Summary: Spiking Neural Networks (SNNs) offer a promising alternative to Artificial Neural Networks (ANNs) due to their energy efficiency and biological plausibility. A widely used approach to building SNNs is conversion from pretrained ANNs, which avoids the cost of training from scratch. As Transformers have shown strong ability on complex tasks, several rate-based Transformer-to-SNN conversion methods have been proposed. However, the nonlinearity of Transformers poses a major challenge to preserving accuracy. We propose a new method that uses time-to-first-spike (TTFS) coding, where each neuron fires at most once, making full use of the timing information carried by single spikes. To handle the complex operators in Transformers, we introduce a novel neuron model and adapt it to different layers. We also provide a detailed analysis showing that the conversion can be theoretically lossless. Our method is the first to apply TTFS coding to Transformer-to-SNN conversion, achieving performance comparable to the ANN while greatly reducing energy cost. Our work offers a novel way to convert complex architectures into SNNs, helping to further unlock the full potential of SNNs.
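To make the TTFS idea concrete, below is a minimal sketch of the encoding principle described above: each activation is represented by the time of a single spike, with larger values spiking earlier. The linear time mapping, the `t_max` parameter, and all names are illustrative assumptions, not the paper's exact neuron model.

```python
import numpy as np

def ttfs_encode(x, t_max=100.0):
    """Encode activations in [0, 1] as first-spike times.

    Larger activations spike earlier; a zero activation never spikes
    (represented here by an infinite spike time), so each neuron
    emits at most one spike. Illustrative toy mapping only.
    """
    x = np.clip(x, 0.0, 1.0)
    return np.where(x > 0.0, t_max * (1.0 - x), np.inf)

def ttfs_decode(times, t_max=100.0):
    """Recover activations from spike times (inverse of the toy mapping)."""
    return np.where(np.isfinite(times), 1.0 - times / t_max, 0.0)

# Round trip: values survive encoding/decoding exactly under this mapping,
# hinting at why TTFS conversion can in principle be lossless.
acts = np.array([0.0, 0.25, 0.5, 1.0])
assert np.allclose(ttfs_decode(ttfs_encode(acts)), acts)
```

Under this toy scheme, information is carried entirely by spike timing rather than spike counts, which is why a single spike per neuron can suffice and why the energy cost is so much lower than rate coding.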
Link To Code: https://github.com/ForestOnTheLand/TTFSFormer.git
Primary Area: Applications->Neuroscience, Cognitive Science
Keywords: spiking neural networks, ANN-SNN conversion, time-to-first-spike, spiking transformer
Flagged For Ethics Review: true
Submission Number: 8534