Keywords: energy-based models, Hopfield networks, associative memory, dynamical systems, Jacobian spectral norm, local stability, robustness, regularization
TL;DR: Deep network layers can be seen as a finite number of Hopfield-style energy descent steps, and penalizing upper-quantile Jacobian expansion during training improves finite-step stability and robustness without hurting clean accuracy.
Abstract: Energy-based models describe neural computation as descent on a scalar functional, exemplified by Hopfield networks, whereas modern deep networks are typically viewed as static compositions of nonlinear maps. We show that each layer update can be written as a gradient descent step on a Hopfield-style energy, so network evaluation corresponds to a finite sequence of energy descent steps. We analyze the evolution of small perturbations under these finite-step dynamics without assuming convergence. In a weight-shared setting, perturbation growth is governed by the Jacobian spectral norm, linking stability to weight scale and activation curvature. Motivated by this analysis, we introduce a Jacobian quantile regularizer that suppresses expansive updates during training. Experiments on MNIST show improved robustness to additive image perturbations while preserving clean accuracy.
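The abstract does not spell out the regularizer, so the following is a minimal sketch of one plausible reading of it, not the authors' implementation: estimate, per sample, how much a layer map expands a random perturbation direction (a lower bound on the Jacobian spectral norm), then penalize only the upper quantile of those expansion factors. The layer shapes, the quantile level `q=0.9`, the hinge at 1.0, and the helper names are illustrative assumptions.

```python
# Hypothetical sketch of an upper-quantile Jacobian expansion penalty (PyTorch).
# Not the authors' released code; names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def per_sample_expansion(layer, x, eps=1e-12):
    """Per-sample expansion estimate ||v^T J(x)|| for a random unit direction v,
    a lower bound on the Jacobian spectral norm of `layer` at x.
    Assumes 2D inputs (batch, features), e.g. flattened MNIST."""
    x = x.detach().requires_grad_(True)
    y = layer(x)
    v = torch.randn_like(y)
    v = v / (v.norm(dim=1, keepdim=True) + eps)
    # Vector-Jacobian product v^T J; keep the graph so the penalty can train weights.
    (vjp,) = torch.autograd.grad(y, x, grad_outputs=v, create_graph=True)
    return vjp.norm(dim=1)  # shape: (batch,)


def jacobian_quantile_penalty(expansion, q=0.9):
    """Penalize only the expansive tail: hinge loss (above 1.0) averaged over
    samples whose estimated expansion lies at or above the q-th quantile."""
    threshold = torch.quantile(expansion.detach(), q)
    tail = expansion[expansion.detach() >= threshold]
    return F.relu(tail - 1.0).mean()


if __name__ == "__main__":
    # Toy usage on one layer; in training this term would be added to the task
    # loss with an assumed coefficient, e.g. loss + lam * penalty.
    torch.manual_seed(0)
    layer = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.Tanh())
    x = torch.randn(32, 784)  # stand-in for a flattened MNIST batch
    penalty = jacobian_quantile_penalty(per_sample_expansion(layer, x), q=0.9)
    penalty.backward()  # gradients flow into the layer weights
    print(float(penalty))
```

Under this reading, gradients reach the weights only through the most expansive samples in the batch, which matches the stated goal of suppressing expansive updates without constraining the bulk of well-behaved ones.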
Submission Number: 23