Abstract: Spiking Neural Networks (SNNs), with their event-driven and biologically inspired operation, are well suited to energy-efficient neuromorphic hardware. Neural coding, which determines how information is represented as spikes, is critical to SNNs. Time-to-First-Spike (TTFS) coding, which uses at most a single spike per neuron, offers extreme sparsity and energy efficiency, but its sparse firing makes training unstable and limits accuracy. To address these challenges, we propose a training framework comprising parameter initialization, training normalization, temporal output decoding, and a re-evaluation of pooling layers. The proposed initialization and normalization mitigate signal diminishing and gradient vanishing, stabilizing training. The output decoding method aggregates temporal spikes so as to encourage earlier firing, thereby reducing latency. The pooling re-evaluation shows that average-pooling preserves the single-spike characteristic of TTFS coding, whereas max-pooling should be avoided. Experiments show that the framework stabilizes and accelerates training, reduces latency, and achieves state-of-the-art accuracy for TTFS SNNs on MNIST (99.48%), Fashion-MNIST (92.90%), CIFAR-10 (90.56%), and DVS Gesture (95.83%).
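To make the TTFS coding described above concrete, the sketch below shows one common way such an encoder can work: each input intensity is mapped to a single spike whose timing is earlier for stronger inputs. This is an illustrative assumption about the encoding scheme, not the paper's exact formulation; the function name `ttfs_encode` and the linear intensity-to-time mapping are hypothetical.

```python
# Minimal TTFS encoding sketch (illustrative, not the paper's method):
# one spike per neuron, stronger inputs fire earlier.
import numpy as np

def ttfs_encode(x, num_steps=16):
    """Encode intensities x in [0, 1] as at most one spike per neuron.

    Returns a binary array of shape (num_steps, *x.shape).
    """
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    # Intensity 1.0 -> earliest step 0; intensity ~0 -> last step.
    spike_times = np.round((1.0 - x) * (num_steps - 1)).astype(int)
    spikes = np.zeros((num_steps,) + x.shape, dtype=np.uint8)
    steps = np.arange(num_steps).reshape((num_steps,) + (1,) * x.ndim)
    spikes[steps == spike_times] = 1  # exactly one spike per neuron
    return spikes

pixels = np.array([0.9, 0.5, 0.1])
train = ttfs_encode(pixels, num_steps=8)
print(train.argmax(axis=0))  # first-spike step per neuron -> [1 4 6]
```

The at-most-one-spike property is what gives TTFS its extreme sparsity: a neuron's entire activation is carried by a single spike time.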
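The abstract also states that the output decoding aggregates temporal spikes so that earlier firing is rewarded. A plausible reading, sketched below under stated assumptions, is to weight output spikes by a decaying kernel so that earlier spikes contribute more to the class score; the exponential kernel, the constant `tau`, and the function `decode_outputs` are illustrative choices, not the paper's published decoder.

```python
# Hedged sketch of temporal output decoding: earlier spikes count more.
import numpy as np

def decode_outputs(out_spikes, tau=4.0):
    """out_spikes: binary array of shape (num_steps, num_classes)."""
    num_steps = out_spikes.shape[0]
    t = np.arange(num_steps)
    weights = np.exp(-t / tau)                       # decays over time
    scores = (weights[:, None] * out_spikes).sum(axis=0)
    return scores.argmax(), scores

spikes = np.zeros((8, 3), dtype=np.uint8)
spikes[2, 0] = 1  # class 0 fires at step 2
spikes[5, 1] = 1  # class 1 fires later, at step 5
pred, scores = decode_outputs(spikes)
print(pred, scores)  # class 0 wins because its spike arrives earlier
```

Training against such a score pushes the correct output neuron to fire earlier, which is how this style of decoding can reduce inference latency.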
External IDs: dblp:journals/corr/abs-2410-23619