Controlling Neural Network Smoothness for Algorithmic Neural Reasoning

Anonymous

04 Oct 2022 (modified: 05 May 2023) · Submitted to nCSI WS @ NeurIPS 2022
Keywords: neurosymbolic systems, Gaussian process, adversarial robustness, neural algorithmic reasoning
TL;DR: We show that neural networks struggle to emulate even simple algorithmic tasks, and that smoothness priors inspired by Gaussian processes partially remedy this.
Abstract: The modelling framework of neural algorithmic reasoning (Veličković & Blundell, 2021) postulates that a continuous neural network may learn to emulate the discrete reasoning steps of a symbolic algorithm. The purpose of this study is to investigate the underlying hypothesis in the simplest conceivable scenario: the addition of real numbers. We find that two-layer neural networks fail to learn the structure of this task and that growing the network's width leads to a complex division of the input space. This behaviour can be emulated with Gaussian processes using radial basis function kernels of decreasing length scale. Classical results establish an equivalence between Gaussian processes and infinitely wide neural networks. On a sinusoidal regression problem, we demonstrate a tight link between the standard deviation of a network's weights and its effective length scale, suggesting simple modifications that control the smoothness of the function learned by a neural network. This provides a partial remedy to the brittleness of neural network predictions. We validate this further in the setting of adversarial examples, where we demonstrate the gains in robustness that our modification achieves on a standard handwritten digit classification problem. In conclusion, neural networks exhibit inherent problems when emulating even simple algorithmic tasks, but these can be partially mitigated with smoothness priors inspired by Gaussian processes.
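The role that the RBF kernel's length scale plays in the abstract's argument can be illustrated with a minimal Gaussian process regression sketch. This is not the paper's experimental setup; the sinusoidal toy data, the specific length scales, and the total-variation roughness proxy below are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale):
    """Radial basis function kernel; smaller length scales give rougher sample functions."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, length_scale, noise=1e-6):
    """Posterior mean of GP regression with an RBF kernel (standard equations)."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train, length_scale)
    return K_s @ np.linalg.solve(K, y_train)

# Illustrative sinusoidal regression data (not the paper's dataset).
x_train = np.linspace(0, 2 * np.pi, 10)
y_train = np.sin(x_train)
x_test = np.linspace(0, 2 * np.pi, 200)

# A longer length scale yields a smooth interpolant close to the underlying sine;
# a much shorter one produces narrow spikes at the training points.
smooth = gp_posterior_mean(x_train, y_train, x_test, length_scale=1.0)
rough = gp_posterior_mean(x_train, y_train, x_test, length_scale=0.05)

# Roughness proxy: total variation of the posterior mean over the test grid.
tv_smooth = np.abs(np.diff(smooth)).sum()
tv_rough = np.abs(np.diff(rough)).sum()
```

Under the GP/infinite-width correspondence the abstract invokes, shrinking the length scale plays a role analogous to increasing the standard deviation of a network's weights: both make the learned function vary more rapidly between training points, which is the brittleness the proposed smoothness priors aim to control.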