Reproducibility Report: Neural Networks Fail to Learn Periodic Functions and How to Fix It

Published: 01 Apr 2021, Last Modified: 05 May 2023 · RC2020 · Readers: Everyone
Keywords: neural networks, periodic functions, extrapolation
Abstract:

Reproducibility Summary

Scope of Reproducibility
"Neural Networks Fail to Learn Periodic Functions and How to Fix It" demonstrates experimentally that standard activation functions such as ReLU, tanh, and sigmoid, along with their variants, all fail to extrapolate simple periodic functions. The original paper goes on to propose a new activation function, named the snake function. The central claims of the paper are two-fold. (1) The properties of an activation function carry over to the network built from it: a tanh network is smooth and extrapolates to a constant function, while a ReLU network extrapolates linearly; consequently, standard neural networks with conventional activation functions are insufficient for extrapolating periodic functions. (2) The proposed activation function learns periodic functions while optimizing as well as conventional activation functions. While the paper provides both experimental evidence and theoretical justification for these claims, we test them only by experimental means.

Methodology
Although an author was contacted to clarify certain difficulties, the reproduction of all experiments was completed using only the information provided in the original paper. With one exception, links to all datasets used were also provided in the paper itself, which allowed us to implement most experiments from scratch.

Results
We successfully replicated the experiments supporting the central claim of the paper: the proposed snake nonlinearity can learn periodic functions. We also analyze the suitability of the snake activation for other tasks such as generative modeling and sentiment analysis.

What was easy
Many experiments included descriptions of the neural network architectures and graphs showcasing performance, giving us a clear benchmark against which to compare our results.

What was difficult
Data for the human body temperature experiment was not available. Implementation details were not given for initializing the weights of networks that use snake or for using snake with RNNs.

Communication with original authors
One author, Liu Ziyin, was contacted to provide the dataset used for the human body temperature experiment, to elaborate on the implementation of variance correction during weight initialization, and to share his implementation of an RNN using snake. He provided a GitHub link to his code for the human body temperature, market index, and extrapolation experiments, and explained how to implement variance correction. While the code for the RNN implementation using the snake activation was not made public, he provided a screenshot of it.
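For reference, the snake activation discussed above is defined in the original paper as snake_a(x) = x + (1/a) sin²(ax), where the frequency parameter a controls the period of the oscillatory component while the identity term preserves the monotonic trend needed for extrapolation. A minimal NumPy sketch (the function name and default a=1.0 are our choices, not the authors' code):

```python
import numpy as np

def snake(x, a=1.0):
    """Snake activation from the original paper: x + (1/a) * sin^2(a*x).

    `a` sets the frequency of the periodic component; the identity
    term keeps the activation monotone on average, which is what lets
    a snake network extrapolate a periodic trend instead of flattening
    out (tanh) or growing linearly (ReLU).
    """
    return x + np.sin(a * x) ** 2 / a

# Passes through the origin, and returns to the identity whenever
# a*x is a multiple of pi (sin^2 vanishes there).
print(snake(0.0))                 # 0.0
print(snake(np.pi) - np.pi)       # 0.0 (up to floating-point error)
```

Note that this sketch covers only the activation itself; the variance-corrected weight initialization mentioned in the report is a separate detail that the original paper and the author's explanation address.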