Achieving Ultra-Low Latency and Lossless ANN-SNN Conversion through Optimal Elimination of Unevenness Error

ICLR 2026 Conference Submission19177 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Spiking Neural Networks; ANN-SNN Conversion; Unevenness Error
TL;DR: This paper solves the unevenness error in low-latency ANN–SNN conversion via a quantification framework and an elimination condition, validated by experiments (e.g., 74.74% top-1 accuracy on ImageNet-1K with ResNet-34 in 8 time-steps).
Abstract: Spiking Neural Networks (SNNs) are a promising approach for neuromorphic hardware deployment due to their high energy efficiency and biological plausibility. However, existing ANN–SNN conversion methods suffer notable accuracy degradation under low-latency inference, primarily caused by the $\textbf{unevenness error}$. To mitigate this error, prior works commonly adopt trade-off strategies that incur higher latency and energy consumption, such as longer time-steps, more complex spiking neuron models, or two-stage inference mechanisms. In this paper, we present a principled and efficient solution to the unevenness error. Specifically, we first develop a unified framework to quantify the unevenness error and then derive a sufficient condition for eliminating it: under an approximately constant input current, matching the ANN quantization function ($\operatorname{floor}$, $\operatorname{round}$, $\operatorname{ceil}$) with the SNN's initial membrane potential ($0$, $\frac{\theta}{2}$, $\theta$, respectively, where $\theta$ is the firing threshold) and setting the quantization level $L$ equal to the number of time-steps $T$ ensures exact ANN–SNN correspondence. This finding challenges the prevailing belief that more time-steps always yield better accuracy; instead, it reveals that there exists an optimal time-step count that matches the ANN's quantization characteristics, avoiding the redundant inference latency of excessive time-steps. Extensive experiments on CIFAR-100, ImageNet-1K, CIFAR10-DVS, and DVS-Gesture validate our theory. For example, our method achieves a state-of-the-art 74.74\% top-1 accuracy on ImageNet-1K using ResNet-34 with only 8 time-steps, demonstrating the effectiveness of our approach in low-latency SNN inference.
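The matching condition stated in the abstract can be checked numerically. Below is a minimal sketch (assuming soft-reset integrate-and-fire dynamics and a clipped quantization activation; the names `ann_quant` and `snn_rate` are illustrative, not from the paper): with a constant input current, initializing the membrane potential at 0, θ/2, or θ makes the T-step firing rate reproduce the floor-, round-, or ceil-quantized ANN activation when L = T.

```python
import numpy as np

def ann_quant(x, theta=1.0, L=8, mode="round"):
    """Clipped quantization activation assumed for the source ANN."""
    fn = {"floor": np.floor, "round": np.round, "ceil": np.ceil}[mode]
    return theta / L * np.clip(fn(x * L / theta), 0, L)

def snn_rate(x, theta=1.0, T=8, v0_frac=0.5):
    """Average output of a soft-reset IF neuron over T steps, driven by a
    constant input current x, with initial potential v0_frac * theta."""
    v = np.full_like(x, v0_frac * theta)
    spikes = np.zeros_like(x)
    for _ in range(T):
        v = v + x                        # integrate constant input
        s = (v >= theta).astype(x.dtype)  # fire when threshold reached
        v = v - s * theta                # soft reset: subtract threshold
        spikes += s
    return theta * spikes / T

# Pair each quantization function with its matching initial potential.
# Test inputs avoid exact half-integer quantization levels, where
# np.round's banker's rounding differs from the IF neuron's round-half-up.
x = np.array([0.07, 0.13, 0.40, 0.77, 0.93])
for mode, v0 in [("floor", 0.0), ("round", 0.5), ("ceil", 1.0)]:
    assert np.allclose(ann_quant(x, mode=mode), snn_rate(x, v0_frac=v0))
```

Under these assumptions the T-step spike rate equals the quantized ANN activation exactly, so no unevenness error accumulates; the assertions above pass for all three quantization modes.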
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 19177