On orthogonality and learning recurrent networks with long term dependencies

23 Apr 2024 (modified: 21 Jul 2022) · Submitted to ICLR 2017 · Readers: Everyone
Abstract: It is well known that it is challenging to train deep neural networks and recurrent neural networks on tasks that exhibit long term dependencies. The vanishing and exploding gradient problems are well known issues associated with these challenges. One approach to addressing them is to use soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation, which can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect convergence speed and model performance. This paper explores the issues of optimization convergence, speed, and gradient stability using a variety of methods for encouraging or enforcing orthogonality. In particular, we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and thereby control the degree of expansivity induced during backpropagation.
TL;DR: While orthogonal matrices improve neural network stability during training, deviating from orthogonality may improve model convergence speed and performance.
Keywords: Deep learning
Conflicts: umontreal.ca, polymtl.ca
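
The factorization strategy mentioned in the abstract is not spelled out on this page. Below is a minimal sketch of the general idea under stated assumptions: the weight matrix is written as W = U diag(s) Vᵀ with orthogonal factors, and the singular values s are confined to a margin around 1. The function name, the sigmoid-based bound, and the use of NumPy are illustrative choices, not the paper's implementation (which must also keep U and V orthogonal as they are updated during training).

```python
import numpy as np

def bounded_spectrum_matrix(U, V, p, margin=0.1):
    # Assumed parameterization: s = 2*margin*(sigmoid(p) - 0.5) + 1 lies in
    # [1 - margin, 1 + margin]. Keeping the singular values near 1 limits how
    # much backpropagated gradient norms can shrink or grow at each step.
    s = 2.0 * margin * (1.0 / (1.0 + np.exp(-p)) - 0.5) + 1.0
    return U @ np.diag(s) @ V.T

# Illustrative usage with random orthogonal factors obtained from QR decompositions.
rng = np.random.default_rng(0)
n = 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
p = rng.standard_normal(n)  # free parameters controlling the spectrum
W = bounded_spectrum_matrix(U, V, p, margin=0.1)
print(np.linalg.svd(W, compute_uv=False))  # all singular values within [0.9, 1.1]
```

With margin = 0 this reduces to a hard orthogonality constraint; loosening the margin allows the controlled deviation from orthogonality that the TL;DR refers to.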