[Re] Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks

31 Jan 2021 (modified: 05 May 2023) · ML Reproducibility Challenge 2020 Blind Submission · Readers: Everyone
Keywords: spiking-neural-networks, deep-neural-networks, image-classification, reproduction
Abstract:

Scope of Reproducibility
------------------------
In this report, we reproduce the results of a novel learning method for Spiking Neural Networks (SNNs) proposed by Zhang and Li (2020). The proposed learning method utilises biologically more plausible neuron interactions than existing SNN algorithms. The original paper claims that the method achieves higher performance than other state-of-the-art SNN learning algorithms on datasets such as MNIST and CIFAR-10, whilst utilising fewer timesteps.

Methodology
-----------
For the reproduction of the experiments in the paper, we used the authors' original source code with minimal additions: logging facilities and plotting functionality. We also performed an additional hyperparameter search experiment for the MNIST dataset. The experiments were run on two different GPUs, an NVIDIA Tesla V100-PCIE-32GB and an NVIDIA Titan RTX, with total GPU runtimes of 150h 26m and 56h 4m, respectively. In addition to the experiments, we inspected the theoretical equations in the original paper and then scrutinised the source code and its specific implementations of those equations.

Results
-------
Due to the high computational requirements, we reproduced two of the four experiments from the original paper. Overall, our results match those reported in the paper within a reasonable margin. A Bayesian hyperparameter search over different combinations of parameters revealed some insights into the stability of the learning system.

What Was Easy
-------------
The original paper is well written, with clear explanations of the models and the learning algorithm. We also thank the authors for publishing their source code online; this made the reproduction study easier and more fruitful. The source code was written in an understandable way, and the authors provided clear general instructions for rerunning the networks.

What Was Difficult
------------------
The computationally demanding nature of the networks made it challenging for us to reproduce all the experiments in the paper. Additionally, we discovered some parameters that were hardcoded in the source code without explanation, and we could not find their particular contexts in the original paper.

Communication with Original Authors
-----------------------------------
We contacted the authors at the NeurIPS 2020 conference poster session to ask questions about the paper, which they kindly clarified. We also provided some feedback via email regarding the theoretical equations and the code implementation.
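The report mentions an additional Bayesian hyperparameter search on MNIST but does not describe its setup. The following is a minimal sketch of how such a search could be wired up, assuming Optuna as the optimisation library; the search space (learning rate, membrane time constant, number of simulation timesteps) and the `train_snn_on_mnist` helper are hypothetical placeholders, not the authors' actual configuration.

```python
# Hedged sketch of a Bayesian hyperparameter search for the MNIST SNN.
# Assumption: Optuna is used; the report does not name a library.
import optuna


def train_snn_on_mnist(lr: float, tau_m: float, n_steps: int) -> float:
    """Hypothetical helper: train the SNN with the given hyperparameters
    and return validation accuracy. In a real run this would call into
    the authors' published training code."""
    return 0.0  # placeholder; substitute the actual training run


def objective(trial: optuna.trial.Trial) -> float:
    # Illustrative search space: learning rate, membrane time constant,
    # and number of timesteps are typical SNN hyperparameters.
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    tau_m = trial.suggest_float("tau_m", 2.0, 20.0)
    n_steps = trial.suggest_int("n_steps", 5, 20)
    return train_snn_on_mnist(lr, tau_m, n_steps)


study = optuna.create_study(direction="maximize")  # maximise accuracy
study.optimize(objective, n_trials=50)
print(study.best_params)
```

Optuna's default sampler (TPE) is a Bayesian-style optimiser, which matches the "Bayesian hyperparameter search" described in the Results section; any comparable library (e.g. scikit-optimize) could be substituted.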
Paper Url: https://openreview.net/forum?id=F9QmDvKqv09&noteId=T-q1iiWQZnB