Impartial Multi-task Representation Learning via Variance-invariant Probabilistic Decoding

ACL ARR 2025 February Submission8104 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract:

Multi-task learning (MTL) enhances efficiency by sharing representations across tasks, but task dissimilarities often cause partial learning, where some tasks dominate while others are neglected. Existing methods mainly focus on balancing losses or gradients but fail to address this issue fundamentally. In this paper, we propose variance-invariant probabilistic decoding for multi-task learning (VIP-MTL), a framework that ensures impartial learning by harmonizing task-specific representation spaces. VIP-MTL decodes task-agnostic shared representations into task-specific probabilistic distributions and applies variance normalization to constrain them to a consistent scale, balancing task influence during training. Experiments on two language benchmarks show that VIP-MTL outperforms 12 comparative methods under the same multi-task settings, especially in heterogeneous and data-constrained scenarios. Further analysis shows that VIP-MTL is robust to sampling distributions, efficient in the optimization process, and scale-invariant to task losses. Additionally, the learned task-specific representations are more informative, enhancing the language understanding abilities of pre-trained language models under the multi-task paradigm.
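The sketch below illustrates one plausible reading of the mechanism the abstract describes (a shared encoder output decoded into a per-task probabilistic distribution, followed by variance normalization to keep task scales comparable). It is not the authors' implementation; all class and variable names here are hypothetical, and the Gaussian reparameterization is an assumption.

```python
# Minimal sketch (assumed, not the paper's code): probabilistic decoding of a
# shared representation into task-specific distributions with variance
# normalization, so every task's representation lives on a consistent scale.
import torch
import torch.nn as nn

class ProbabilisticTaskDecoder(nn.Module):
    """Decodes a task-agnostic representation into a task-specific Gaussian sample."""
    def __init__(self, shared_dim: int, task_dim: int):
        super().__init__()
        self.mean = nn.Linear(shared_dim, task_dim)
        self.log_var = nn.Linear(shared_dim, task_dim)

    def forward(self, h_shared: torch.Tensor) -> torch.Tensor:
        mu = self.mean(h_shared)
        std = torch.exp(0.5 * self.log_var(h_shared))
        # Reparameterized sample from the task-specific distribution.
        z = mu + std * torch.randn_like(std)
        # Variance normalization: rescale so each task representation has
        # (approximately) unit variance, balancing task influence in training.
        z = (z - z.mean(dim=-1, keepdim=True)) / (z.std(dim=-1, keepdim=True) + 1e-6)
        return z

# Usage: one decoder per task on top of the shared encoder output.
shared_dim, task_dim, batch, num_tasks = 768, 256, 4, 3
h = torch.randn(batch, shared_dim)  # task-agnostic shared representation
decoders = nn.ModuleList([ProbabilisticTaskDecoder(shared_dim, task_dim)
                          for _ in range(num_tasks)])
task_reprs = [dec(h) for dec in decoders]  # one variance-normalized space per task
```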

Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: representation learning, multi-task learning
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 8104