Meta-Knowledge Extraction: Uncertainty-Aware Prompted Meta-Learning

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Meta-Learning, Prompt Tuning, Bayesian Inference
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Conventional meta-learning typically adapts all meta-knowledge to each specific task, which incurs a high computational cost due to the adaptation process. To address this limitation, we introduce a more efficient gradient-based meta-learning framework called Uncertainty-Aware Prompted Meta-Learning (UAPML). Instead of adapting the entire body of meta-knowledge, we introduce a meta-knowledge extraction paradigm inspired by the success of large language models: we freeze the model backbone and employ task-specific prompts to extract meta-knowledge for few-shot tasks. To construct the task-specific prompts, a learnable Bayesian meta-prompt provides a good initialization. Through theoretical analysis, we demonstrate that the posterior uncertainty of the Bayesian meta-prompt aligns with that of the task-specific prompt and can therefore be used to modulate the construction of task-specific prompts. Accordingly, we propose two ways, a soft one and a hard one, to automatically construct task-specific prompts from the meta-prompt for new tasks. Experimental results demonstrate the efficiency of the meta-knowledge extraction paradigm and show that UAPML significantly reduces computational cost without degrading performance.
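To make the prompt-construction idea concrete, below is a minimal PyTorch-style sketch of how a Bayesian meta-prompt with per-token posterior uncertainty could drive a soft (reparameterized sampling) and a hard (uncertainty-thresholded masking) construction of task-specific prompts. All names, shapes, and the thresholding rule are illustrative assumptions based only on the abstract; this is not the authors' released code.

```python
# Hypothetical sketch of uncertainty-aware prompt construction (not the paper's code).
import torch
import torch.nn as nn


class BayesianMetaPrompt(nn.Module):
    """Learnable meta-prompt with a diagonal Gaussian posterior per token.

    The posterior mean serves as the shared initialization; the posterior
    variance modulates how each token is adapted to a new few-shot task.
    """

    def __init__(self, num_tokens: int = 8, dim: int = 768):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(num_tokens, dim))       # posterior mean
        self.log_var = nn.Parameter(torch.zeros(num_tokens, dim))  # posterior log-variance

    def soft_prompt(self) -> torch.Tensor:
        """Soft construction: sample via the reparameterization trick, so
        high-uncertainty tokens vary more from task to task."""
        std = torch.exp(0.5 * self.log_var)
        return self.mu + std * torch.randn_like(std)

    def hard_prompt(self, threshold: float = 0.5) -> torch.Tensor:
        """Hard construction (assumed rule): let gradients adapt only tokens
        whose mean posterior variance exceeds a threshold; keep confident
        tokens frozen at the meta-learned mean."""
        var = self.log_var.exp().mean(dim=-1, keepdim=True)  # per-token uncertainty
        mask = (var > threshold).float()                      # 1 = adaptable token
        return mask * self.mu + (1.0 - mask) * self.mu.detach()


if __name__ == "__main__":
    meta_prompt = BayesianMetaPrompt(num_tokens=8, dim=768)
    tokens = torch.randn(4, 16, 768)               # batch of input token embeddings
    prompt = meta_prompt.soft_prompt()             # or meta_prompt.hard_prompt()
    prompted = torch.cat([prompt.expand(4, -1, -1), tokens], dim=1)
    print(prompted.shape)  # torch.Size([4, 24, 768]) -- fed to the frozen backbone
```

In this reading, only the small prompt tensors receive task-specific gradients while the backbone stays frozen, which is consistent with the reduced adaptation cost the abstract claims.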
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4840