SHOT: Suppressing the Hessian along the Optimization Trajectory for Gradient-Based Meta-Learning

Published: 21 Sept 2023 · Last Modified: 02 Nov 2023 · NeurIPS 2023 poster
Keywords: meta-learning, Hessian, gradient-based meta-learning, feature reuse, implicit prior
TL;DR: We propose SHOT (Suppressing the Hessian along the Optimization Trajectory), a plug-and-play algorithm for gradient-based meta-learning. SHOT is agnostic to both algorithm and architecture while maintaining the baseline's computational complexity.
Abstract: In this paper, we hypothesize that gradient-based meta-learning (GBML) implicitly suppresses the Hessian along the optimization trajectory in the inner loop. Based on this hypothesis, we introduce an algorithm called SHOT (Suppressing the Hessian along the Optimization Trajectory) that minimizes the distance between the parameters of the target and reference models to suppress the Hessian in the inner loop. Despite dealing with high-order terms, SHOT adds little computational overhead to the baseline model. It is agnostic to both the algorithm and architecture used in GBML, making it highly versatile and applicable to any GBML baseline. To validate the effectiveness of SHOT, we conduct empirical tests on standard few-shot learning tasks and qualitatively analyze its dynamics. We confirm our hypothesis empirically and demonstrate that SHOT outperforms the corresponding baseline.
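As a rough illustration of the idea described in the abstract (not the paper's exact formulation), the sketch below shows an inner loop that adds a distance penalty between the adapted (target) parameters and a reference model's parameters on top of the task loss. The function name `shot_inner_loop`, the squared-L2 distance, and the coefficient `lambda_shot` are illustrative assumptions.

```python
# Hypothetical sketch of a SHOT-style inner loop (illustrative only;
# shot_inner_loop, lambda_shot, and the squared-L2 penalty are
# assumptions, not the paper's exact formulation).
import torch


def shot_inner_loop(model, ref_params, support_x, support_y,
                    loss_fn, inner_lr=0.01, steps=5, lambda_shot=0.1):
    """Adapt `model` on a task's support set while penalizing the
    distance between its parameters and reference parameters,
    which is intended to suppress the Hessian along the trajectory."""
    for _ in range(steps):
        task_loss = loss_fn(model(support_x), support_y)
        # Distance-to-reference penalty (assumed squared L2): pulls the
        # target parameters toward the reference model's parameters.
        dist = sum(((p - r) ** 2).sum()
                   for p, r in zip(model.parameters(), ref_params))
        loss = task_loss + lambda_shot * dist
        grads = torch.autograd.grad(loss, list(model.parameters()))
        # First-order parameter update on the support set.
        with torch.no_grad():
            for p, g in zip(model.parameters(), grads):
                p -= inner_lr * g
    return model
```

In this sketch the reference parameters `ref_params` could, for example, be a frozen copy of the meta-initialization; how the reference model is chosen is a design detail specified in the paper, not here.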
Supplementary Material: zip
Submission Number: 4552