Keywords: optimizer, gradient-based learning, power allocation, quadratic form
TL;DR: A gradient-based optimizer translates loss-gradient "force" into parameter motion, and we use this analogy to design problem-specific optimizers.
Abstract: We lay the theoretical foundation for automating optimizer design in gradient-based learning. Based on the greedy principle, we formulate optimizer design as maximizing the instantaneous decrease in the loss. By treating an optimizer as a function that translates loss-gradient signals into parameter motions, the problem reduces to a family of convex optimization problems over the space of optimizers. Solving these problems under various constraints not only recovers a wide range of popular optimizers as closed-form solutions, but also yields the optimal hyperparameters of these optimizers for the problems at hand. This enables a systematic approach to designing optimizers and tuning their hyperparameters according to gradient statistics collected from training or validation sets. Furthermore, this optimization of optimization can be performed dynamically during training.
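To illustrate the kind of derivation the abstract describes, here is a minimal sketch under an assumed quadratic-form constraint on the parameter velocity (the constraint set, the matrix M, and the budget c are illustrative choices, not taken from the paper). Greedily maximizing the instantaneous loss decrease gives

\[
\min_{\dot{\theta}} \; \nabla L(\theta)^\top \dot{\theta}
\quad \text{s.t.} \quad \dot{\theta}^\top M \dot{\theta} \le c, \; M \succ 0,
\]
\[
\dot{\theta}^\star = -\sqrt{\frac{c}{\nabla L^\top M^{-1} \nabla L}} \; M^{-1} \nabla L .
\]

With M = I this reduces to normalized gradient descent, and other choices of the constraint recover other preconditioned update rules, which is the sense in which familiar optimizers can appear as closed-form solutions of such convex problems.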
Primary Area: learning theory
Submission Number: 16641