Keywords: Spiking Neural Networks, Expressivity, Approximation Theory, Time-to-First-Spike Coding, Linear Pieces, Parameter Initialization
TL;DR: We introduce the concept of causal pieces to analyse spiking neural networks (SNNs) and demonstrate that their number is a measure of expressivity.
Abstract: We introduce *causal pieces*, a novel concept for analysing spiking neural networks (SNNs), inspired by *linear pieces* used to study expressivity and trainability in artificial neural networks (ANNs). Causal pieces partition the input and parameter space of an SNN into distinct regions where the same subnetwork causes the output spikes of the SNN. For networks of integrate-and-fire neurons with exponential synapses, we show that within each causal piece, output spike times are locally Lipschitz continuous with respect to the input spike times and network parameters. We also prove that the number of causal pieces is a measure of the approximation capabilities of SNNs. Empirically, we find that parameter initialisations yielding more causal pieces on the training set strongly correlate with SNN training success. Remarkably, even SNNs with only positive weights can exhibit a high number of causal pieces, allowing them to achieve competitive performance on diverse benchmarks such as Yin-Yang, MNIST, and EuroSAT, compared to fully-connected ANNs. These results establish causal pieces as a powerful and principled tool for analysing and improving the computational capabilities of SNNs.
Primary Area: applications to neuroscience & cognitive science
Submission Number: 17327