Understanding and Improving Information Transfer in Multi-Task Learning

Published: 20 Dec 2019, Last Modified: 22 Oct 2023 · ICLR 2020 Conference Blind Submission
TL;DR: A Theoretical Study of Multi-Task Learning with Practical Implications for Improving Multi-Task Training and Transfer Learning
Abstract: We investigate multi-task learning approaches that use a shared feature representation for all tasks. To better understand the transfer of task information, we study an architecture with a shared module for all tasks and a separate output module for each task. We study the theory of this setting on linear and ReLU-activated models. Our key observation is that whether or not tasks' data are well-aligned can significantly affect the performance of multi-task learning. We show that misalignment between task data can cause negative transfer (i.e., hurt performance) and provide sufficient conditions for positive transfer. Inspired by the theoretical insights, we show that aligning tasks' embedding layers leads to performance gains for multi-task training and transfer learning on the GLUE benchmark and sentiment analysis tasks; for example, our alignment method yields an average improvement of 2.35% in GLUE score on 5 GLUE tasks over BERT-LARGE. We also design an SVD-based task re-weighting scheme and show that it improves the robustness of multi-task training on a multi-label image dataset.
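
The architecture studied in the abstract follows the standard hard-parameter-sharing pattern: one shared (here ReLU-activated) feature module plus a separate output module per task. The sketch below is a minimal PyTorch illustration of that setup, not the authors' implementation; the class name `SharedMultiTaskModel`, the layer sizes, and the summed-loss training step are assumptions made for the example.

```python
import torch
import torch.nn as nn

class SharedMultiTaskModel(nn.Module):
    """Hard parameter sharing: one shared encoder, one output head per task."""

    def __init__(self, input_dim, hidden_dim, task_output_dims):
        super().__init__()
        # Shared module: a ReLU-activated feature extractor used by every task.
        self.shared = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        # Separate output module (head) for each task.
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, out_dim) for out_dim in task_output_dims
        )

    def forward(self, x, task_id):
        # Route the shared representation through the requested task's head.
        return self.heads[task_id](self.shared(x))


# Illustrative usage: two classification tasks sharing features,
# trained by summing the per-task losses.
model = SharedMultiTaskModel(input_dim=128, hidden_dim=64, task_output_dims=[2, 3])
x_t0, y_t0 = torch.randn(8, 128), torch.randint(0, 2, (8,))
x_t1, y_t1 = torch.randn(8, 128), torch.randint(0, 3, (8,))
loss = (nn.functional.cross_entropy(model(x_t0, 0), y_t0)
        + nn.functional.cross_entropy(model(x_t1, 1), y_t1))
loss.backward()
```

In this setting, the paper's observation is that how well the tasks' input data are aligned in the shared representation determines whether the shared module helps or hurts each task.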
Keywords: Multi-Task Learning
Data: [ChestX-ray14](https://paperswithcode.com/dataset/chestx-ray14), [GLUE](https://paperswithcode.com/dataset/glue), [MPQA Opinion Corpus](https://paperswithcode.com/dataset/mpqa-opinion-corpus), [QNLI](https://paperswithcode.com/dataset/qnli), [SST](https://paperswithcode.com/dataset/sst)
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2005.00944/code)