Cross-Task Gradient Harmonization for Meta-Learning

20 Sept 2023 (modified: 25 Mar 2024) — ICLR 2024 Conference Withdrawn Submission
Keywords: Few-shot Learning, Meta-learning, Gradient Conflict
TL;DR: This paper proposes a method to harmonize conflicting gradients among different tasks during gradient-based meta-learning.
Abstract: We introduce Dynamic Gradient Harmonization, a novel solution to the gradient conflict issue in optimization-based meta-learning. The goal of meta-learning is to adapt quickly to unseen tasks with limited training examples. Many meta-learning strategies aim to identify an optimal model initialization, iteratively updating the meta-model using gradients from adapted models fine-tuned on a variety of tasks. However, existing methods neglect potential conflicts among the meta-gradient updates from different tasks, which hinder training of the meta-model. To address this shortcoming, we propose a dynamic gradient harmonization technique that reconciles these conflicting gradient updates into a single, effective meta-model update. This is achieved by computing a primary gradient update as a weighted aggregation of the gradients from fine-tuned models, using an attention operator to emphasize the dominant gradient directions. We also employ an explore-exploit mechanism to prevent over-commitment to local optima. Experimental results demonstrate the effectiveness of our approach, yielding more efficient training and improved generalization to new tasks.
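The abstract only sketches the aggregation step at a high level, so the snippet below is a minimal illustrative interpretation, not the authors' implementation: per-task meta-gradients are weighted by an attention-style score measuring agreement with the consensus direction, with an epsilon-style explore step falling back to a plain average. The function name, the similarity-based attention scores, and the `temperature`/`explore_prob` parameters are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def harmonize_gradients(task_grads, temperature=1.0, explore_prob=0.1):
    """Aggregate per-task meta-gradients into one harmonized update.

    Illustrative sketch based only on the abstract: an attention operator
    weights each task gradient by its agreement with the mean direction,
    and an explore step occasionally uses the unweighted average to avoid
    over-committing to a single direction (local optimum).
    """
    # Stack flattened per-task gradients: shape (num_tasks, num_params).
    G = torch.stack([g.flatten() for g in task_grads])

    # Explore: with small probability, return the plain mean gradient.
    if torch.rand(1).item() < explore_prob:
        return G.mean(dim=0)

    # Exploit: attention scores = cosine similarity of each task gradient
    # to the mean direction; softmax converts them into aggregation weights,
    # so gradients that agree with the consensus dominate the update.
    mean_dir = F.normalize(G.mean(dim=0), dim=0)
    scores = F.normalize(G, dim=1) @ mean_dir
    weights = torch.softmax(scores / temperature, dim=0)

    # Weighted aggregation yields the harmonized meta-gradient.
    return (weights.unsqueeze(1) * G).sum(dim=0)
```

Under this reading, conflicting (low-similarity) task gradients receive small weights rather than being projected out, which is one plausible way to realize the "primary gradient update from weighted aggregation" described above.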
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2841