Keywords: Spiking Neural Networks, Expressivity, Approximation Theory, Time-to-First-Spike Coding, Linear Pieces, Parameter Initialization
TL;DR: We introduce the concept of causal pieces to analyse spiking neural networks (SNNs) and demonstrate that their number is a measure of expressivity.
Abstract: We introduce a novel concept for spiking neural networks (SNNs) derived from the idea of *linear pieces* used to analyse the expressivity and trainability of artificial neural networks (ANNs). We show that the input and parameter space of an SNN decomposes into distinct regions, which we call *causal pieces*, in which the output spikes are caused by the same subnetwork. For integrate-and-fire neurons with exponential synapses, we prove that, within each causal piece, output spike times are locally Lipschitz continuous with respect to the input spike times and network parameters. Furthermore, we prove that the number of causal pieces is a measure of the approximation capabilities of SNNs. In particular, we demonstrate in simulation that parameter initialisations that yield a high number of causal pieces on the training set correlate strongly with SNN training success. Moreover, we find that SNNs with purely positive weights exhibit a surprisingly high number of causal pieces, allowing them to achieve competitive performance on benchmark tasks. We believe that causal pieces are a powerful and principled tool for improving SNNs, and may also provide new ways of comparing SNNs and ANNs in the future.
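To make the counting idea concrete, here is a minimal sketch (Python/NumPy; not the paper's code) of how one might estimate the number of causal pieces by sampling. For non-leaky integrate-and-fire neurons with exponentially decaying synaptic currents under time-to-first-spike coding, the output spike time has a closed form once the set of causally contributing inputs is fixed, so each sampled input can be labelled by the causal sets it activates and the distinct labels counted. The network size, constants, weight initialisation, and sampling procedure are all illustrative assumptions.

```python
# A minimal sketch (assumptions throughout): sample the number of "causal
# pieces" for one layer of non-leaky integrate-and-fire neurons with
# exponentially decaying synaptic currents, under time-to-first-spike coding.
import numpy as np

TAU = 1.0    # synaptic time constant (assumed)
THETA = 1.0  # firing threshold (assumed)

def first_spike_time(in_times, weights):
    """Return (spike time, causal input set) for one IF neuron.

    With exponential synapses, once the causal set C of inputs arriving
    before the output spike is fixed, the threshold crossing has the
    closed form  t = TAU * log(b / (a - THETA/TAU)),  where
    a = sum_{i in C} w_i  and  b = sum_{i in C} w_i * exp(t_i / TAU).
    We grow C one input event at a time and accept the first solution
    that falls before the next input arrives.
    """
    order = np.argsort(in_times)
    a = b = 0.0
    for k, i in enumerate(order):
        a += weights[i]
        b += weights[i] * np.exp(in_times[i] / TAU)
        if a * TAU <= THETA or b <= 0.0:  # not enough drive to reach threshold
            continue
        t = TAU * np.log(b / (a - THETA / TAU))
        nxt = in_times[order[k + 1]] if k + 1 < len(order) else np.inf
        if in_times[i] <= t <= nxt:       # crossing before the next input event
            return t, frozenset(order[:k + 1].tolist())
    return np.inf, frozenset()            # neuron stays silent

def causal_signature(x, W):
    """Tuple of per-neuron causal input sets: identifies the causal piece."""
    return tuple(first_spike_time(x, W[j])[1] for j in range(W.shape[0]))

rng = np.random.default_rng(0)
W = rng.uniform(0.1, 1.0, size=(8, 5))           # purely positive weights (assumed init)
samples = rng.uniform(0.0, 1.0, size=(2000, 5))  # random input spike times

pieces = {causal_signature(x, W) for x in samples}
print(f"distinct causal pieces sampled: {len(pieces)}")
```

In a deeper network one would propagate the computed spike times layer by layer and concatenate the per-layer signatures to identify the full causal subnetwork; the purely positive initialisation above simply echoes the abstract's observation about positive-weight SNNs and is not a requirement of the procedure.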
Primary Area: applications to neuroscience & cognitive science
Submission Number: 17327