Keywords: neuromorphic computing, spiking neural networks, optimization, weight initialization
Abstract: Spiking Neural Networks (SNNs) offer advantages such as sparse activity and ultra-low power consumption, making them a promising alternative to conventional artificial neural networks (ANNs). However, training deep SNNs is challenging because membrane potentials are quantized into binary spikes, which can cause information loss and vanishing spike activity in deeper layers. Weight initialization methods developed for ANNs are often carried over to SNNs without accounting for the distinct computational properties of spiking neurons. In this work, we derive an optimal weight initialization method tailored to SNNs that explicitly takes the quantization operation into account. Through theoretical analysis and simulations of networks with up to 100 layers, we demonstrate that our method allows spiking activity to propagate through deep SNNs without vanishing. Experiments on MNIST confirm that the proposed initialization scheme yields higher accuracy, faster convergence, and robustness to variations in network and neuron hyperparameters.
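The abstract does not reproduce the derived initialization formula, but the failure mode it describes can be illustrated with a minimal simulation. The sketch below propagates binary spikes through a deep stack of non-leaky integrate-and-fire layers and compares a standard He-style initialization against a variance scaling chosen so that layer activity has a stable fixed point. The `gain` constant of 5 and the helper functions are hypothetical illustrations under these assumptions, not the paper's derived scheme.

```python
import numpy as np

def init_weights(fan_in, fan_out, sigma, rng):
    """Zero-mean Gaussian weights with standard deviation `sigma`."""
    return rng.normal(0.0, sigma, size=(fan_in, fan_out))

def if_layer(spikes, weights, threshold=1.0):
    """One feed-forward non-leaky integrate-and-fire step: membrane
    potentials are quantized into binary spikes by thresholding."""
    membrane = spikes @ weights
    return (membrane >= threshold).astype(np.float64)

def spike_fractions(sigma, n=512, depth=100, threshold=1.0, seed=0):
    """Fraction of active neurons per layer for a given weight std."""
    rng = np.random.default_rng(seed)
    spikes = (rng.random(n) < 0.5).astype(np.float64)  # ~50% input activity
    fractions = []
    for _ in range(depth):
        w = init_weights(n, n, sigma, rng)
        spikes = if_layer(spikes, w, threshold)
        fractions.append(spikes.mean())
    return fractions

n, threshold = 512, 1.0
he_sigma = np.sqrt(2.0 / n)               # He-style init, ignores quantization
snn_sigma = 5.0 * threshold / np.sqrt(n)  # hypothetical gain; NOT the paper's formula

print("He-style :", spike_fractions(he_sigma)[-1])   # activity collapses to 0
print("SNN-aware:", spike_fractions(snn_sigma)[-1])  # activity stabilizes near ~0.37
```

The contrast follows from the self-consistency condition p = Q(threshold / (sigma * sqrt(n * p))), where p is the fraction of active neurons and Q is the Gaussian tail function: for sigma = 5 * threshold / sqrt(n) this condition has a stable solution near p ≈ 0.37, whereas He scaling drives p to zero within a few layers, which is the vanishing-spike problem the abstract describes.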
Submission Number: 12