Abstract: Recent years have seen significant progress in developing spiking neural networks (SNNs) as a potential solution to the energy challenges posed by conventional artificial neural networks (ANNs). However, our theoretical understanding of SNNs remains relatively limited compared to the ever-growing body of literature on ANNs. In this paper, we study a discrete-time model of SNNs based on leaky integrate-and-fire (LIF) neurons, referred to as discrete-time LIF-SNNs, a widely used framework that still lacks solid theoretical foundations. We demonstrate that discrete-time LIF-SNNs realize piecewise constant functions defined on polyhedral regions, and more importantly, we quantify the network size required to approximate continuous functions. Moreover, we investigate the impact of latency (number of time steps) and depth (number of layers) on the complexity of the input space partitioning induced by discrete-time LIF-SNNs. Our analysis highlights the importance of latency and contrasts these networks with ANNs that use piecewise linear activation functions. Finally, we present numerical experiments to support our theoretical findings.
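To make the setting concrete, the following is a minimal sketch of one layer of discrete-time LIF neurons in Python. The leak factor beta, threshold theta, and reset-by-subtraction convention are assumptions for illustration, not the paper's exact formulation; with binary spike outputs at each step, the layer's response to a fixed real-valued input is piecewise constant, loosely mirroring the abstract's claim.

    import numpy as np

    def lif_layer(x_seq, W, beta=0.9, theta=1.0):
        """Simulate one layer of discrete-time LIF neurons over T time steps.

        x_seq : array of shape (T, d_in), input (spikes or real values) per step
        W     : array of shape (d_out, d_in), synaptic weights
        beta  : leak factor in (0, 1] (assumed)
        theta : firing threshold (assumed)
        Returns the binary spike trains, shape (T, d_out).
        """
        T = x_seq.shape[0]
        d_out = W.shape[0]
        u = np.zeros(d_out)                    # membrane potentials
        spikes = np.zeros((T, d_out))
        for t in range(T):
            u = beta * u + W @ x_seq[t]        # leak and integrate
            s = (u >= theta).astype(float)     # fire where the threshold is crossed
            u = u - theta * s                  # reset by subtraction (one common convention)
            spikes[t] = s
        return spikes

    # Toy usage: 2 input neurons, 3 output neurons, 4 time steps
    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=(4, 2)).astype(float)
    W = rng.normal(size=(3, 2))
    print(lif_layer(x, W))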
Lay Summary: Spiking neural networks (SNNs) are gaining increasing interest as a potentially more energy-efficient alternative to traditional artificial neural networks. However, our theoretical understanding of what SNNs can actually do—such as the types of problems they can solve and the functions they can represent—remains limited. In our work, we take a first step toward addressing these questions by applying tools and concepts commonly used in deep learning research. This foundational insight helps pave the way for better understanding and potentially more effective designs of SNNs in the future.
Primary Area: Theory->Deep Learning
Keywords: Spiking neural networks, Linear regions, Approximation theory
Submission Number: 16203