Meta Learning with Minimax Regularization

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: meta learning, generalization, minimax regularization
Abstract: Even though meta-learning has attracted wide research attention in recent years, its generalization problem is still not well addressed. Existing works focus on meta-generalization to unseen tasks at the meta-level, while ignoring that adapted models may not generalize to the task domain at the adaptation-level, a problem that cannot be solved trivially. To this end, we propose a new regularization mechanism for meta-learning -- Minimax-Meta Regularization. Specifically, we maximize the regularizer in the inner loop to encourage the adapted model to be more sensitive to the new task, and minimize the regularizer in the outer loop to resist overfitting of the meta-model. This adversarial regularization forces the meta-algorithm to maintain generality at the meta-level while making it easy to learn specific assumptions at the task-specific level, thereby improving the generalization of meta-learning. We conduct extensive experiments on representative meta-learning scenarios, including few-shot learning and robust reweighting, to verify the proposed method. The results show that our method consistently improves the performance of the meta-learning algorithms and demonstrates the effectiveness of Minimax-Meta Regularization.
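The opposite-signed regularization described in the abstract can be sketched in a first-order, MAML-style bilevel update. The code below is a minimal illustration, not the authors' implementation: it assumes a linear-regression task family, an L2 regularizer, a first-order approximation of the meta-gradient, and hypothetical names (`inner_adapt`, `outer_step`, `lam`) chosen for exposition. The inner loop subtracts the regularizer from the task loss (maximizing it), while the outer loop adds it to the meta-loss (minimizing it).

```python
import numpy as np

def task_loss(theta, X, y):
    """Mean-squared error of a linear model X @ theta against targets y."""
    return np.mean((X @ theta - y) ** 2)

def grad_task(theta, X, y):
    """Gradient of the mean-squared error with respect to theta."""
    return 2.0 * X.T @ (X @ theta - y) / len(y)

def inner_adapt(theta, X_tr, y_tr, alpha=0.01, lam=0.1):
    """Inner loop: MAXIMIZE the L2 regularizer (subtract its gradient),
    encouraging the adapted model to move away from the meta-initialization
    and fit the new task more aggressively."""
    g = grad_task(theta, X_tr, y_tr) - lam * 2.0 * theta
    return theta - alpha * g

def outer_step(theta, tasks, alpha=0.01, beta=0.001, lam=0.1):
    """Outer loop: MINIMIZE the L2 regularizer (add its gradient to the
    meta-gradient), resisting overfitting of the meta-model. Uses a
    first-order (FOMAML-style) approximation of the meta-gradient."""
    meta_grad = np.zeros_like(theta)
    for X_tr, y_tr, X_val, y_val in tasks:
        theta_prime = inner_adapt(theta, X_tr, y_tr, alpha, lam)
        meta_grad += grad_task(theta_prime, X_val, y_val)
    meta_grad = meta_grad / len(tasks) + lam * 2.0 * theta
    return theta - beta * meta_grad
```

The key design choice is that the same regularizer appears with opposite signs in the two loops: gradient ascent on it during adaptation, gradient descent on it at the meta-level, giving the adversarial effect described in the abstract.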