TL;DR: We propose Fixed-Point Parallel Training (FPT), a method that accelerates Spiking Neural Network training by reducing time complexity from $O(T)$ to $O(K)$ using parallel fixed-point updates, without sacrificing accuracy or changing the model architecture.
Abstract: Spiking Neural Networks (SNNs) often suffer from high time complexity $O(T)$ due to the sequential processing over $T$ timesteps, which makes training computationally expensive.
In this paper, we propose a novel Fixed-Point Parallel Training (FPT) method to accelerate SNN training without modifying the network architecture or introducing additional assumptions.
FPT reduces the time complexity to $O(K)$, where $K$ is a small constant (typically $K=3$), by recasting the Leaky Integrate-and-Fire (LIF) dynamics as a fixed-point iteration that updates all $T$ timesteps in parallel (a minimal sketch follows the abstract).
We provide a theoretical convergence analysis of FPT and demonstrate that existing parallel spiking neurons can be viewed as special cases of our approach.
Experimental results show that FPT effectively simulates the dynamics of original LIF neurons, significantly reducing computational time without sacrificing accuracy.
This makes FPT a scalable and efficient solution for real-world applications, particularly for long-duration simulations.
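To make the fixed-point view concrete, here is a minimal NumPy sketch of iterating LIF dynamics over all timesteps in parallel. This is an illustration, not the authors' implementation: the soft-reset convention, decay factor `lam`, threshold `v_th`, zero-spike initialization, and the function name `lif_fixed_point_parallel` are all assumptions made for this example.

```python
import numpy as np

def lif_fixed_point_parallel(x, lam=0.9, v_th=1.0, K=3):
    """Sketch: parallel fixed-point iteration over a LIF spike train.

    x    : (T,) input current at each timestep
    lam  : membrane decay factor (assumed value)
    v_th : firing threshold (assumed value)
    K    : number of fixed-point iterations, K >= 1 (paper reports K around 3)
    """
    T = x.shape[0]
    # Lower-triangular decay matrix: A[t, i] = lam**(t - i) for i <= t, else 0.
    idx = np.arange(T)
    A = np.tril(lam ** (idx[:, None] - idx[None, :]))

    s = np.zeros(T)  # initial guess: no spikes anywhere
    for _ in range(K):
        # Membrane potential at every timestep, computed at once under a
        # soft-reset convention: each past spike subtracts a decayed v_th
        # from the current potential; (A @ s - s) drops the self-term s[t].
        u = A @ x - v_th * (A @ s - s)
        s = (u >= v_th).astype(x.dtype)  # Heaviside firing rule
    return s, u

# Example: a random 64-step input current.
rng = np.random.default_rng(0)
spikes, potentials = lif_fixed_point_parallel(rng.uniform(0.0, 0.5, size=64))
```

Each iteration touches all $T$ timesteps at once (the matrix product), so the sequential depth is $K$ rather than $T$; under these assumptions the spike train typically stops changing after a few iterations and matches a step-by-step LIF rollout.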
Lay Summary: Training brain-inspired neural networks—known as Spiking Neural Networks (SNNs)—has long been slow and computationally expensive. These models process information sequentially, like flipping through every frame of a long video, which makes training time-consuming.
Our research presents a faster alternative. We developed a method called Fixed-Point Parallel Training (FPT) that replaces this frame-by-frame processing with a few carefully coordinated parallel passes. This significantly speeds up training without altering the model’s structure or requiring additional assumptions.
FPT maintains the accuracy of conventional training methods while greatly reducing computational time. In our experiments, it proved especially effective for large-scale, time-intensive tasks, demonstrating its potential for real-world deployment.
In short, FPT helps brain-like AI systems learn more quickly and efficiently—paving the way for practical, energy-saving applications of neural computing.
Primary Area: Deep Learning->Algorithms
Keywords: Parallel Training, Spiking Neural Networks, Fixed-point Iteration
Submission Number: 6029