Non Vanishing Gradients for Arbitrarily Deep Neural Networks: a Hamiltonian System Approach

Published: 17 Oct 2021, Last Modified: 05 May 2023 · DLDE Workshop -- NeurIPS 2021 Poster
Keywords: Deep neural networks, Hamiltonian systems, ODE discretization
Abstract: Training Deep Neural Networks (DNNs) can be difficult due to vanishing or exploding gradients during weight optimization through backpropagation. To address this problem, we propose a general class of Hamiltonian DNNs (H-DNNs) that stems from the discretization of continuous-time Hamiltonian systems. Our main result is that a broad subclass of H-DNNs guarantees non-vanishing gradients by design, for arbitrary network depth. This property follows from proving that, under a semi-implicit Euler discretization scheme, the backward sensitivity matrices involved in gradient computations are symplectic.
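
The following minimal Python sketch illustrates the kind of construction the abstract describes: a layer obtained by one semi-implicit (symplectic) Euler step of a Hamiltonian system on a split state (p, q). It is a hypothetical illustration, not the authors' code; the separable Hamiltonian H(p, q) = 1ᵀ log cosh(K_p p + b_p) + 1ᵀ log cosh(K_q q + b_q), the function name `semi_implicit_euler_step`, and the step size h are all assumptions chosen for readability. Because symplectic Euler yields a symplectic step map, the layer Jacobians have unit determinant, which is the mechanism behind the non-vanishing-gradient guarantee.

```python
import numpy as np

def semi_implicit_euler_step(p, q, Kp, bp, Kq, bq, h=0.1):
    """One H-DNN-style layer (illustrative sketch, not the paper's exact
    parameterization): semi-implicit Euler on a separable Hamiltonian.

    Gradient of 1^T log cosh(K x + b) w.r.t. x is K^T tanh(K x + b).
    """
    # Update p using the *old* q: p_{k+1} = p_k - h * dH/dq(q_k)
    p_new = p - h * Kq.T @ np.tanh(Kq @ q + bq)
    # Update q using the *new* p: q_{k+1} = q_k + h * dH/dp(p_{k+1})
    q_new = q + h * Kp.T @ np.tanh(Kp @ p_new + bp)
    return p_new, q_new

# Stacking many such steps gives an arbitrarily deep network whose
# layer-to-layer sensitivity matrices are symplectic, so backpropagated
# gradients cannot vanish as depth grows.
rng = np.random.default_rng(0)
n = 4
p, q = rng.standard_normal(n), rng.standard_normal(n)
Kp, Kq = rng.standard_normal((n, n)), rng.standard_normal((n, n))
bp, bq = np.zeros(n), np.zeros(n)
for _ in range(20):  # 20 "layers"
    p, q = semi_implicit_euler_step(p, q, Kp, bp, Kq, bq)
```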
Publication Status: This work is unpublished.
