Adaptation-Agnostic Meta-Training

Published: 14 Jul 2021, Last Modified: 22 Oct 2023, AutoML@ICML2021 Poster
Keywords: meta-learning
Abstract: Many meta-learning algorithms can be formulated as an interleaved process: task-specific predictors are learned during inner-task adaptation, and meta-parameters are updated during the meta-update. The standard meta-training strategy must differentiate through the inner-task adaptation procedure to optimize the meta-parameters. This imposes the constraint that the inner-task algorithm be solvable analytically, so only simple algorithms with analytical solutions can serve as inner-task algorithms, limiting model expressiveness. To lift this limitation, we propose an adaptation-agnostic meta-training strategy. Under the proposed strategy, stronger algorithms (e.g., an ensemble of different types of algorithms) can be applied as the inner-task algorithm, achieving superior performance compared with popular baselines.
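The contrast between the two strategies can be sketched in a few lines of code. The PyTorch sketch below illustrates the general idea as described in the abstract, not the paper's actual implementation: in the standard strategy the inner-task solver stays inside the autograd graph (and must therefore be analytically differentiable), whereas in the adaptation-agnostic strategy the solver runs as a black box under `no_grad`, and gradients reach the meta-parameters only through the query-set loss of the frozen adapted predictor. All names here (`encoder`, `ridge_solver`, `blackbox_solver`) are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def ridge_solver(z, y, lam=1.0, n_classes=5):
    """Closed-form ridge regression to one-hot targets: differentiable,
    so it can stay inside the autograd graph (standard strategy)."""
    t = F.one_hot(y, n_classes).float()
    a = z.t() @ z + lam * torch.eye(z.shape[1])
    return torch.linalg.solve(a, z.t() @ t).t()          # (n_classes, dim)

def standard_meta_step(encoder, batch, opt):
    """Standard strategy: backpropagate THROUGH inner-task adaptation,
    which therefore must admit an analytical solution."""
    (xs, ys), (xq, yq) = batch
    w = ridge_solver(encoder(xs), ys)                    # kept in the graph
    loss = F.cross_entropy(encoder(xq) @ w.t(), yq)
    opt.zero_grad(); loss.backward(); opt.step()

def adaptation_agnostic_meta_step(encoder, blackbox_solver, batch, opt):
    """Adaptation-agnostic strategy (sketch): the inner-task algorithm is a
    black box whose output weights are treated as constants, so gradients
    reach the encoder only via the query features. Any solver returning a
    linear predictor (an SVM, an ensemble of algorithms, ...) can plug in."""
    (xs, ys), (xq, yq) = batch
    with torch.no_grad():                                # no graph through adaptation
        w = blackbox_solver(encoder(xs), ys)             # (n_classes, dim)
    loss = F.cross_entropy(encoder(xq) @ w.t(), yq)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the black-box solver never enters the autograd graph, this sketch suggests why the adaptation-agnostic strategy places no analytical-solvability requirement on the inner-task algorithm.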
Ethics Statement: In this paper, we provide a unified view of the commonly used meta-training strategy and propose an adaptation-agnostic meta-training strategy that is more general, more flexible, and less prone to overfitting. It can offer new insights to the meta-learning community and inspire researchers to develop new meta-algorithms.
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2108.10557/code)