Analytic Insights into Structure and Rank of Neural Network Hessian Maps

21 May 2021, 20:46 (edited 15 Jan 2022) · NeurIPS 2021 Poster
  • Keywords: hessian, rank, neural networks, overparameterization, degeneracy, singularity, loss landscape, degrees of freedom
  • TL;DR: Neural networks provably have a much lower effective number of parameters than you think --- as shown by our exact formulas for the Hessian rank.
  • Abstract: The Hessian of a neural network captures parameter interactions through second-order derivatives of the loss. It is a fundamental object of study, closely tied to various problems in deep learning, including model design, optimization, and generalization. Most prior work has been empirical, typically focusing on low-rank approximations and heuristics that are blind to the network structure. In contrast, we develop theoretical tools to analyze the range of the Hessian map, which provide us with a precise understanding of its rank deficiency and the structural reasons behind it. This yields exact formulas and tight upper bounds for the Hessian rank of deep linear networks --- allowing for an elegant interpretation in terms of rank deficiency. Moreover, we demonstrate that our bounds remain faithful as an estimate of the numerical Hessian rank, for a larger class of models such as rectified and hyperbolic tangent networks. Further, we investigate the implications of model architecture (e.g.~width, depth, bias) for the rank deficiency. Overall, our work provides novel insights into the source and extent of redundancy in overparameterized neural networks.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
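The rank deficiency the abstract describes is easy to observe numerically. The sketch below is an illustrative example, not the authors' code: it builds a small two-layer linear network, evaluates the squared-loss Hessian at a global minimizer by central finite differences, and compares its numerical rank to the raw parameter count. All dimensions, data, and tolerances are arbitrary assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's code): numerical Hessian rank of a
# two-layer linear network f(x) = W2 @ W1 @ x under squared loss, evaluated
# at a global minimizer, versus the raw parameter count.

rng = np.random.default_rng(0)
n, m, k, N = 4, 3, 2, 20            # input, hidden, output widths; samples
X = rng.standard_normal((n, N))
Y = rng.standard_normal((k, N))

shapes = [(m, n), (k, m)]           # W1, W2
sizes = [r * c for r, c in shapes]
p = sum(sizes)                      # total parameter count (18 here)

def loss(theta):
    W1 = theta[:sizes[0]].reshape(shapes[0])
    W2 = theta[sizes[0]:].reshape(shapes[1])
    R = W2 @ W1 @ X - Y
    return 0.5 * np.sum(R * R)

# A global minimizer: since m >= k, the product W2 @ W1 can realize the
# unconstrained least-squares solution W_star exactly.
W_star = Y @ X.T @ np.linalg.inv(X @ X.T)     # k x n
W2 = rng.standard_normal((k, m))              # generic full-row-rank factor
W1 = np.linalg.pinv(W2) @ W_star              # so that W2 @ W1 == W_star
theta0 = np.concatenate([W1.ravel(), W2.ravel()])

# Hessian entries by central finite differences (O(p^2) loss evaluations).
eps = 1e-4
H = np.zeros((p, p))
for i in range(p):
    for j in range(p):
        def f(di, dj):
            t = theta0.copy()
            t[i] += di
            t[j] += dj
            return loss(t)
        H[i, j] = (f(eps, eps) - f(eps, -eps)
                   - f(-eps, eps) + f(-eps, -eps)) / (4 * eps ** 2)

# Numerical rank: count singular values above a relative tolerance.
svals = np.linalg.svd(H, compute_uv=False)
rank = int(np.sum(svals > 1e-6 * svals[0]))
print(f"parameters: {p}, numerical Hessian rank: {rank}")
```

At the minimizer the rank comes out well below the parameter count, reflecting the flat directions of the product parameterization (e.g.~the rescaling W1 -> a W1, W2 -> W2 / a leaves the loss unchanged). The exact formulas in the paper characterize this deficit precisely; the sketch only exhibits it numerically.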