Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation

22 Sept 2023 (modified: 11 Feb 2024), Submitted to ICLR 2024
Supplementary Material: pdf
Primary Area: infrastructure, software libraries, hardware, etc.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Mixture of Experts, Soft-Prompts, Task tuning, Compressed LLMs
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: MoPs train LLMs on multiple tasks simultaneously while mitigating "task interference". A gating function identifies the relevant skills in each input and dynamically assigns a combination of prompt experts; empirically, the approach is agnostic to model compression techniques.
Abstract: Large Language Models (LLMs) can solve a variety of tasks, such as text summarization and mathematical questions, out of the box, but they are often trained with a single task in mind. Due to high computational costs, the current trend is to use prompt instruction tuning to better adjust monolithic, pretrained LLMs for new --but often individual-- downstream tasks. Thus, how to expand prompt tuning to handle --concomitantly-- heterogeneous tasks and data distributions remains a wide-open question. To address this gap, we propose Mixture of Prompts, or MoPs, paired with a smart gating function: the latter --whose design is one of the contributions of this paper-- identifies relevant skills embedded in different groups of prompts and dynamically assigns combined experts (i.e., collections of prompts) based on the target task. Additionally, MoPs are empirically agnostic to the model compression technique applied --for efficiency reasons-- as well as to the instruction data source and task composition. In practice, MoPs can simultaneously mitigate prompt training "interference" in multi-task, multi-source scenarios (e.g., task and data heterogeneity across sources), as well as possible implications of model approximations. As a highlight, MoPs decrease final perplexity by $\sim20$% up to $\sim70$% compared to baselines in the federated scenario, and by $\sim 3$% up to $\sim30$% in the centralized scenario.
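To make the abstract's mechanism concrete, below is a minimal sketch (not the authors' implementation) of a mixture-of-prompts layer: a gating network scores several groups of soft prompts from a pooled input representation and prepends their weighted combination to the token embeddings of a frozen LLM. The class name `MoPLayer` and parameters such as `num_experts` and `prompt_len` are hypothetical choices for illustration only.

```python
# Illustrative sketch of a mixture-of-prompts layer, assuming PyTorch and a frozen LLM
# whose input embeddings are computed externally. Not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoPLayer(nn.Module):
    def __init__(self, hidden_dim: int, num_experts: int = 4, prompt_len: int = 16):
        super().__init__()
        # K groups of trainable soft prompts; each group acts as one "expert" skill.
        self.prompts = nn.Parameter(
            torch.randn(num_experts, prompt_len, hidden_dim) * 0.02
        )
        # Gating function: maps a pooled input representation to expert weights.
        self.gate = nn.Linear(hidden_dim, num_experts)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden_dim) from the frozen LLM's embedding layer.
        pooled = token_embeds.mean(dim=1)                # (batch, hidden_dim)
        weights = F.softmax(self.gate(pooled), dim=-1)   # (batch, num_experts)
        # Per-example weighted combination of the expert prompt groups.
        mixed_prompt = torch.einsum("be,eph->bph", weights, self.prompts)
        # Prepend the combined soft prompt to the input sequence.
        return torch.cat([mixed_prompt, token_embeds], dim=1)

# Example usage (hypothetical): embeds = model.get_input_embeddings()(input_ids)
# inputs_embeds = MoPLayer(hidden_dim=embeds.size(-1))(embeds)
```

Only the prompt parameters and the gate would be trained, which is consistent with the prompt-tuning setting the abstract describes; how the paper's gating function is actually pretrained or regularized is detailed in the full text.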
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6509