Pretraining with Re-parametrized Self-Attention: Unlocking Generalization in SNN-Based Neural Decoding Across Time, Brains, and Tasks
Keywords: Brain-Machine Interface, Neural Spike Decoding, Spiking Neural Network, Foundation Model
TL;DR: Combining re-parametrized attention with multi-timescale dynamics, we design a lightweight pretrained SNN that achieves SOTA performance on SNN CST decoding tasks and demonstrates strong generalization to unseen conditions at low computational cost.
Abstract: The emergence of large-scale neural activity datasets provides new opportunities to enhance the generalization of neural decoding models. However, it remains a practical challenge to design neural decoders for fully implantable brain-machine interfaces (iBMIs) that achieve high accuracy, strong generalization, and low computational cost, which are essential for reliable, long-term deployment under strict power and hardware constraints.
To address this, we propose the Re-parametrized self-Attention Spiking Neural Network (RAT SNN) with a cross-condition pretraining framework to integrate neural variability and adapt to stringent computational constraints.
Specifically, our approach introduces multi-timescale dynamic spiking neurons to capture the complex temporal variability of neural activity.
We refine spike-driven attention within a lightweight, re-parametrized architecture that enables accumulate-only operations between spiking neurons without sacrificing decoding accuracy.
Furthermore, we develop a stepwise training pipeline to systematically integrate neural variability across conditions, including neural temporal drift, subjects, and tasks.
Building on these advances, we construct a pretrained model capable of rapid generalization to unseen conditions with high performance.
We demonstrate that RAT SNN consistently outperforms leading SNN baselines and matches the accuracy of state-of-the-art artificial neural network (ANN) models with much lower computational cost under both seen and unseen conditions across various datasets.
Collectively, the pretrained RAT SNN represents a high-performance, highly generalizable, and energy-efficient prototype of an SNN foundation model for fully implantable BMIs.
Code is available at [RAT SNN GitHub](https://github.com/YangYangYangYuqi/RAT-SNN).
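To illustrate the two mechanisms named in the abstract, the sketch below combines a multi-timescale leaky integrate-and-fire (LIF) layer, where each neuron carries its own membrane time constant, with a spike-driven attention step in which the query/key/value inputs are binary spike trains, so the matrix products reduce to accumulate-only operations. This is a minimal illustration under assumed LIF dynamics; the function names, time-constant values, and hard-reset rule are hypothetical and are not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-timescale LIF layer: each neuron has its own
# membrane time constant, so the layer mixes fast and slow dynamics.
def multi_timescale_lif(inputs, taus, v_th=1.0):
    """inputs: (T, N) input currents; taus: (N,) per-neuron time constants."""
    T, N = inputs.shape
    decay = np.exp(-1.0 / taus)          # per-neuron leak factor
    v = np.zeros(N)
    spikes = np.zeros((T, N))
    for t in range(T):
        v = decay * v + inputs[t]        # leaky integration
        fired = v >= v_th
        spikes[t] = fired
        v = np.where(fired, 0.0, v)      # hard reset after a spike
    return spikes

# Spike-driven attention sketch: because q, k, v are binary spike trains,
# the matrix products amount to summing selected entries, i.e. they can be
# realized with accumulate-only hardware operations (no real-valued multiplies).
def spike_attention(q, k, v):
    scores = q @ k.T                     # integer spike-overlap counts
    return scores @ v                    # weighted sum, again adds only

T, N = 8, 4
x = rng.normal(0.5, 0.2, size=(T, N))
taus = np.array([2.0, 4.0, 8.0, 16.0])  # assumed mix of timescales
s = multi_timescale_lif(x, taus)
out = spike_attention(s, s, s)
print(out.shape)
```

In a real decoder these blocks would be trained end-to-end (e.g. with surrogate gradients) and the re-parametrization would fold auxiliary branches into a single inference-time path; the sketch only shows why binary spikes make the attention arithmetic accumulate-only.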
Supplementary Material: zip
Primary Area: applications to neuroscience & cognitive science
Submission Number: 3440