On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers

28 Sep 2020 (modified: 18 Feb 2021) · ICLR 2021 Spotlight · Readers: Everyone
  • Keywords: Implicit Deep Learning, Deep Equilibrium Models, Gradient Descent, Learning Theory, Non-Convex Optimization
  • Abstract: A deep equilibrium model uses implicit layers, which are implicitly defined through an equilibrium point of an infinite sequence of computations. It avoids any explicit computation of the infinite sequence by finding an equilibrium point directly via root-finding and by computing gradients via implicit differentiation. In this paper, we analyze the gradient dynamics of deep equilibrium models with nonlinearity only on weight matrices and non-convex objective functions of weights for regression and classification. Despite non-convexity, convergence to a global optimum at a linear rate is guaranteed without any assumption on the width of the models, allowing the width to be smaller than the output dimension and the number of data points. Moreover, we prove a relation between the gradient dynamics of the deep implicit layer and the dynamics of a trust-region Newton method applied to a shallow explicit layer. This mathematically proven relation, along with our numerical observations, suggests the importance of understanding the implicit bias of implicit layers and poses an open problem on the topic. Our proofs deal with implicit layers, weight tying, and nonlinearity on weights, and differ from those in the related literature.
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • One-sentence Summary: We analyze gradient dynamics of a simple deep equilibrium model and mathematically prove its theoretical properties.
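The abstract's two computational ingredients can be illustrated numerically. Below is a minimal NumPy sketch, not the authors' implementation: all variable names, dimensions, and the quadratic loss are illustrative assumptions. When the nonlinearity acts only on the weight matrix, the fixed-point map z ↦ σ(W)z + Ux is linear in z, so the equilibrium can be reached by iteration (avoiding the infinite sequence explicitly) and the gradient can be obtained by implicit differentiation rather than backpropagating through the iterates.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 4, 3  # hypothetical hidden and input dimensions

# Hypothetical parameters; W is scaled so that ||tanh(W)|| < 1,
# which makes the fixed-point iteration below converge.
W = 0.1 * rng.normal(size=(d, d))
U = rng.normal(size=(d, p))
x = rng.normal(size=p)
y = rng.normal(size=d)  # regression target

A = np.tanh(W)  # nonlinearity applied to the weight matrix, not to activations

# Forward pass: find the equilibrium point of z = A z + U x by iteration.
z = np.zeros(d)
for _ in range(500):
    z = A @ z + U @ x

# The limit equals the closed-form root of z = A z + U x.
z_star = np.linalg.solve(np.eye(d) - A, U @ x)
assert np.allclose(z, z_star, atol=1e-8)

# Backward pass via implicit differentiation: for L = 0.5 ||z* - y||^2,
# dL/dx = U^T (I - A)^{-T} (z* - y), with no backprop through the iterates.
grad_x = U.T @ np.linalg.solve((np.eye(d) - A).T, z_star - y)

# Finite-difference check of the implicit gradient.
eps = 1e-6
fd = np.zeros(p)
for i in range(p):
    xp = x.copy(); xp[i] += eps
    xm = x.copy(); xm[i] -= eps
    zp = np.linalg.solve(np.eye(d) - A, U @ xp)
    zm = np.linalg.solve(np.eye(d) - A, U @ xm)
    fd[i] = (0.5 * np.sum((zp - y) ** 2) - 0.5 * np.sum((zm - y) ** 2)) / (2 * eps)
assert np.allclose(grad_x, fd, atol=1e-5)
```

In this linear-in-z special case the root has a closed form, which makes the implicit gradient easy to verify; general deep equilibrium models replace the iteration with a root-finder and the linear solve with a vector-Jacobian solve.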