Scalable Multi-Modal Continual Meta-Learning

Published: 01 Feb 2023, Last Modified: 13 Feb 2023, Submitted to ICLR 2023
Keywords: Continual meta-learning, Indian Buffet Process, Evidential Sparsification
Abstract: This paper focuses on continual meta-learning, where few-shot tasks arrive sequentially and are sampled from a non-stationary distribution. Motivated by this challenging setting, many works combine a mixture of meta-knowledge to cope with task heterogeneity with a dynamically changing number of components to capture incremental information. However, the underlying assumption of mutual exclusiveness among mixture components prevents meta-knowledge from being shared across different clusters of tasks. Moreover, existing incremental methods rely only on the prior to decide whether to add meta-knowledge, and this unbounded growth leads to parameter inefficiency. In this work, we propose a Scalable Multi-Modal Continual Meta-Learning (SMM-CML) algorithm. It employs a multi-modal premise that not only encourages different clusters of tasks to share meta-knowledge but also maintains their diversity. Moreover, to capture incremental information, our algorithm uses the Indian Buffet Process (IBP) as a prior over the number of components and introduces a sparsification method based on evidential theory to filter out components that receive no support information directly from tasks. We can therefore learn the posterior number of components, avoiding parameter inefficiency and reducing computational cost. Experiments show that SMM-CML outperforms SOTA baselines, which illustrates the effectiveness of our multi-modal meta-knowledge and confirms that our algorithm learns the meta-knowledge that tasks really need.
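For intuition about how an IBP prior lets the number of meta-knowledge components grow with incoming tasks, the sketch below samples task-to-component assignments from the standard sequential (culinary-metaphor) IBP construction. This is a minimal illustration of the prior only, not the paper's algorithm; the function name `sample_ibp` and its parameters are assumptions introduced here for illustration.

```python
import numpy as np

def sample_ibp(num_tasks, alpha, seed=None):
    """Sequential Indian Buffet Process construction (illustrative only).

    Returns a binary matrix Z of shape (num_tasks, K), where Z[i, k] = 1
    means task i uses meta-knowledge component k; K is not fixed a priori
    but grows as new tasks arrive.
    """
    rng = np.random.default_rng(seed)
    dish_counts = []   # m_k: how many previous tasks use component k
    assignments = []   # per-task lists of chosen component indices

    for i in range(1, num_tasks + 1):
        chosen = []
        # Reuse each existing component k with probability m_k / i.
        for k, m_k in enumerate(dish_counts):
            if rng.random() < m_k / i:
                chosen.append(k)
        # Add Poisson(alpha / i) brand-new components for this task.
        for _ in range(rng.poisson(alpha / i)):
            dish_counts.append(0)
            chosen.append(len(dish_counts) - 1)
        for k in chosen:
            dish_counts[k] += 1
        assignments.append(chosen)

    Z = np.zeros((num_tasks, len(dish_counts)), dtype=int)
    for i, chosen in enumerate(assignments):
        Z[i, chosen] = 1
    return Z

# Example: 10 sequential tasks with concentration alpha = 2.0.
Z = sample_ibp(10, alpha=2.0, seed=0)
print(Z.shape)  # the number of components K is inferred, not preset
```

In a continual meta-learning setting, the rows correspond to sequentially arriving tasks, so new components appear only when the prior (and, in the paper's method, the evidential filtering of unsupported components) warrants them.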
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning