On the Importance of Full Rank Initializations in Deep Neural Networks

May 17, 2019 (Blind Submission)
  • Abstract: Several methods have been proposed over the last few years for initializing the weights of neural networks, either to converge to better solutions or to reduce the time taken for convergence. Separately, recent efforts have connected the full-rank nature of the weight matrices to the optimality of the final converged solution. In this work, we study the connection between popular initialization methods and the conditions necessary at the optimal solution, using deep linear networks with the squared loss. Through this connection, we attempt to provide a new explanation for why these different initialization methods work well in practice.
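The abstract does not include code, but the central property it invokes is easy to illustrate: a weight matrix drawn from a continuous distribution (as in common Gaussian or Xavier/Glorot-style initializations) is full rank with probability 1. The sketch below, a hypothetical illustration using NumPy rather than the paper's own code, checks this for a Xavier-style Gaussian initialization:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_gaussian_init(fan_in, fan_out):
    """Xavier/Glorot-style Gaussian initialization (illustrative only)."""
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_out, fan_in))

# Draw a weight matrix for a hypothetical 64 -> 32 linear layer.
W = xavier_gaussian_init(fan_in=64, fan_out=32)

# Entries are drawn from a continuous distribution, so the matrix is
# full rank (rank equal to min(fan_in, fan_out)) with probability 1.
print(np.linalg.matrix_rank(W), min(W.shape))
```

Note that a zero initialization, by contrast, produces a rank-0 matrix, which is one intuition for why such initializations fail while random ones succeed.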