Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data

Published: 12 Jan 2021, Last Modified: 03 Apr 2024, ICLR 2021 Poster, Readers: Everyone
Keywords: Multi-Task Learning, Adaptive Learning, Transfer Learning, Natural Language Processing, Hypernetwork
Abstract: Multi-Task Learning (MTL) networks have emerged as a promising method for transferring learned knowledge across different tasks. However, MTL must deal with challenges such as overfitting to low-resource tasks, catastrophic forgetting, and negative task transfer (learning interference). Often, in Natural Language Processing (NLP), a separate model per task is needed to obtain the best performance. However, many fine-tuning approaches are both parameter inefficient, i.e., potentially involving one new model per task, and highly susceptible to losing knowledge acquired during pretraining. We propose a novel Transformer-based Hypernetwork Adapter consisting of a new conditional attention mechanism as well as a set of task-conditioned modules that facilitate weight sharing. Through this construction, we achieve more efficient parameter sharing and mitigate forgetting by keeping half of the weights of a pretrained model fixed. We also use a new multi-task data sampling strategy to mitigate the negative effects of data imbalance across tasks. With this approach, we surpass single-task fine-tuning methods while being parameter and data efficient (using around 66% of the data). Compared to other BERT-Large methods on GLUE, our 8-task model surpasses other Adapter methods by 2.8%, and our 24-task model outperforms models that use MTL and single-task fine-tuning by 0.7-1.0%. We show that a larger variant of our single multi-task model approach performs competitively across 26 NLP tasks and yields state-of-the-art results on a number of test and development sets.
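As a rough illustration of the task-conditioned attention idea described in the abstract, the minimal PyTorch sketch below uses a small hypernetwork to map a learned task embedding to a per-head bias on the attention logits. All module and parameter names (`TaskConditionedAttention`, `task_dim`, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): a hypernetwork conditions
# self-attention on a task embedding via a per-head additive bias.
import math
import torch
import torch.nn as nn


class TaskConditionedAttention(nn.Module):
    def __init__(self, hidden_dim=768, num_heads=12, num_tasks=8, task_dim=64):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_dim // num_heads
        self.qkv = nn.Linear(hidden_dim, 3 * hidden_dim)
        self.out = nn.Linear(hidden_dim, hidden_dim)
        # Learned task embeddings and a hypernetwork that turns them
        # into one bias scalar per attention head.
        self.task_embedding = nn.Embedding(num_tasks, task_dim)
        self.attn_bias_hypernet = nn.Linear(task_dim, num_heads)

    def forward(self, x, task_id):
        # x: (batch, seq_len, hidden_dim); task_id: (batch,) long tensor
        b, s, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)

        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)
        # Condition the attention logits on the task identity.
        bias = self.attn_bias_hypernet(self.task_embedding(task_id))  # (b, heads)
        scores = scores + bias.view(b, self.num_heads, 1, 1)

        attn = scores.softmax(dim=-1)
        ctx = (attn @ v).transpose(1, 2).reshape(b, s, d)
        return self.out(ctx)


# Example usage
layer = TaskConditionedAttention()
tokens = torch.randn(2, 16, 768)
task_ids = torch.tensor([0, 3])
print(layer(tokens, task_ids).shape)  # torch.Size([2, 16, 768])
```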
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: Can multi-task learning outperform single-task fine-tuning? CA-MTL is a new method that shows it is possible with task-conditioned model adaptation via a Hypernetwork and uncertainty sampling.
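The uncertainty sampling mentioned in the summary can be sketched as weighting tasks by the mean prediction entropy of the current model, so that less certain tasks are drawn more often. The measure and the helper names below are assumptions for illustration, not the paper's exact algorithm.

```python
# Minimal sketch of uncertainty-driven task sampling: tasks with higher
# mean prediction entropy receive a larger sampling weight.
import torch


def task_sampling_weights(task_logits):
    """task_logits: dict of task name -> (batch, num_classes) logits."""
    weights = {}
    for name, logits in task_logits.items():
        probs = logits.softmax(dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)  # (batch,)
        weights[name] = entropy.mean().item()
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}


# Example: peaked (confident) logits get a lower sampling weight.
logits = {
    "mnli": torch.randn(32, 3),
    "sst2": 5.0 * torch.randn(32, 2),
}
probs = task_sampling_weights(logits)
task_names = list(probs)
idx = torch.multinomial(torch.tensor([probs[t] for t in task_names]), 1).item()
print(probs, "-> sampled:", task_names[idx])
```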
Supplementary Material: zip
Code: [CAMTL/CA-MTL](https://github.com/CAMTL/CA-MTL)
Data: [CoLA](https://paperswithcode.com/dataset/cola), [GLUE](https://paperswithcode.com/dataset/glue), [MRPC](https://paperswithcode.com/dataset/mrpc), [MRQA](https://paperswithcode.com/dataset/mrqa-2019), [MultiNLI](https://paperswithcode.com/dataset/multinli), [QNLI](https://paperswithcode.com/dataset/qnli), [SNLI](https://paperswithcode.com/dataset/snli), [SST](https://paperswithcode.com/dataset/sst), [SST-2](https://paperswithcode.com/dataset/sst-2), [SciTail](https://paperswithcode.com/dataset/scitail), [SuperGLUE](https://paperswithcode.com/dataset/superglue), [WNUT 2017](https://paperswithcode.com/dataset/wnut-2017-emerging-and-rare-entity)