Keywords: Spiking Neural Networks (SNNs), Rotary Positional Encoding (RoPE), Central Pattern Generator (CPG), Neuromorphic Computing, Time-series Forecasting, Text Classification
Abstract: Positional signals in spiking neural networks (SNNs) are distorted by spike binarization and the nonlinear dynamics of Leaky Integrate-and-Fire (LIF) neurons, which degrades self-attention. We introduce Spiking-RoPE, a spiking-friendly relative rotary positional encoding that applies two-dimensional, spatiotemporal position-dependent rotations to queries and keys before binarization, so that relative phase kernels are preserved in expectation under LIF dynamics while content is left intact. Building on this core, we propose Spiking Fused-PE (SF-PE), a scheme that fuses absolute CPG-based spikes with Spiking-RoPE. The resulting attention score decomposes into complementary row/column (absolute) and diagonal (relative) structures, expanding the representable function space. We validate our method on two diverse domains (time-series forecasting and text classification) with Spikformer, Spike-driven Transformer, and QKFormer backbones. SF-PE consistently improves accuracy and length extrapolation. Ablations on rotation bases and on 1D vs. 2D variants support the design choices. These results establish rotary encoding as an effective, spiking-friendly relative positional encoding for SNNs and show that fusing absolute and relative signals yields synergistic benefits under spiking constraints. Code: https://anonymous.4open.science/r/SNN-RoPE-F6DE.
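To make the ordering described in the abstract concrete (rotate queries/keys first, binarize second), here is a minimal NumPy sketch of one plausible 2D spatiotemporal rotary scheme. It is an illustration under stated assumptions, not the authors' implementation: the function names (rope_1d, spiking_rope_2d, binarize), the rotation base of 10000, the split of the head dimension into a time half and a token half, and the Heaviside threshold used as a crude stand-in for full LIF neuron dynamics are all hypothetical choices.

    # Minimal sketch: position-dependent rotation BEFORE spike generation,
    # so relative phase structure survives binarization in expectation.
    import numpy as np

    def rope_1d(x, pos, base=10000.0):
        """Standard RoPE: rotate channel pairs of x by angles pos * freq_i."""
        d = x.shape[-1]                                   # must be even
        freqs = base ** (-np.arange(0, d, 2) / d)         # (d/2,)
        ang = pos[..., None] * freqs                      # (..., d/2)
        cos, sin = np.cos(ang), np.sin(ang)
        x1, x2 = x[..., 0::2], x[..., 1::2]
        out = np.empty_like(x)
        out[..., 0::2] = x1 * cos - x2 * sin
        out[..., 1::2] = x1 * sin + x2 * cos
        return out

    def spiking_rope_2d(x, t_pos, n_pos):
        """One plausible 2D variant: time-position RoPE on the first half of
        channels, token-position RoPE on the second half."""
        half = x.shape[-1] // 2
        return np.concatenate(
            [rope_1d(x[..., :half], t_pos), rope_1d(x[..., half:], n_pos)],
            axis=-1,
        )

    def binarize(x, threshold=0.0):
        """Heaviside spike generation; a stand-in for LIF dynamics."""
        return (x > threshold).astype(x.dtype)

    # Toy shapes: T time steps, N tokens, d head channels (d divisible by 4).
    T, N, d = 4, 8, 16
    rng = np.random.default_rng(0)
    q = rng.standard_normal((T, N, d))
    k = rng.standard_normal((T, N, d))

    t_pos = np.broadcast_to(np.arange(T)[:, None], (T, N)).astype(float)
    n_pos = np.broadcast_to(np.arange(N)[None, :], (T, N)).astype(float)

    # Rotate first, then binarize (the ordering the abstract emphasizes).
    q_spk = binarize(spiking_rope_2d(q, t_pos, n_pos))
    k_spk = binarize(spiking_rope_2d(k, t_pos, n_pos))

    # Spike-based attention scores per time step (no softmax, as in many SNN attentions).
    scores = np.einsum('tnd,tmd->tnm', q_spk, k_spk)
    print(scores.shape)   # (4, 8, 8)

Reversing the two stages (binarizing before rotating) would discard the phase information the rotation is meant to inject, which is the failure mode the abstract attributes to naive positional encodings under spiking constraints.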
Supplementary Material: pdf
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 16019