Keywords: Multitask Learning, Cooperative Game Theory, Centralized Bargaining Theory
TL;DR: We propose a multitask learning algorithm, inspired by cooperative bargaining theory, that is invariant to monotonic nonaffine task loss transformations.
Abstract: Multitask learning (MTL) algorithms typically combine different task losses or their gradients through weighted averaging. These methods aim to find Pareto stationary points via heuristics that require access to task loss values, gradients, or both. A central challenge in doing so is that task losses can be arbitrarily, nonaffinely scaled relative to one another, causing certain tasks to dominate training and degrade overall performance. A recent advance in cooperative bargaining theory, the Direction-based Bargaining Solution ($\texttt{DiBS}$), yields Pareto stationary solutions immune to task domination because it is invariant to monotonic nonaffine task loss transformations. However, the convergence behavior of $\texttt{DiBS}$ in nonconvex MTL settings is not yet understood. To close this gap, we prove that under standard assumptions, a subsequence of $\texttt{DiBS}$ iterates converges to a Pareto stationary point even when task losses are nonconvex, and we propose $\texttt{DiBS-MTL}$, a computationally efficient adaptation of $\texttt{DiBS}$ to the MTL setting. Finally, we validate $\texttt{DiBS-MTL}$ empirically on standard MTL benchmarks, showing that it achieves competitive performance with state-of-the-art methods while remaining robust to monotonic nonaffine transformations that significantly degrade the performance of existing approaches, including prior bargaining-inspired MTL methods.
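The failure mode the abstract describes can be seen in a minimal sketch (a hypothetical two-task toy example, not the paper's benchmarks or the $\texttt{DiBS-MTL}$ algorithm itself): applying a monotonic nonaffine transformation such as $\exp(\cdot)$ to one task's loss leaves that task's minimizers and the Pareto front unchanged, yet rescales its gradient by the chain rule, so a naive equal-weight gradient average becomes dominated by the transformed task.

```python
import numpy as np

# Two toy quadratic task losses over a shared parameter vector x.
def loss1(x):   # task 1: quadratic centered at (1, 0)
    return 0.5 * np.sum((x - np.array([1.0, 0.0])) ** 2)

def grad1(x):
    return x - np.array([1.0, 0.0])

def loss2(x):   # task 2: quadratic centered at (0, 1)
    return 0.5 * np.sum((x - np.array([0.0, 1.0])) ** 2)

def grad2(x):
    return x - np.array([0.0, 1.0])

# A monotonic nonaffine transformation of task 2's loss. It preserves the
# task's minimizers, but by the chain rule its gradient is rescaled by
# exp(loss2(x)), which can be enormous far from the optimum.
def grad2_warped(x):
    return np.exp(loss2(x)) * grad2(x)

x = np.array([5.0, -5.0])
g_avg = 0.5 * (grad1(x) + grad2(x))          # balanced contributions
g_avg_warped = 0.5 * (grad1(x) + grad2_warped(x))  # task 2 dominates
print(g_avg)
print(g_avg_warped)
```

Loss- or gradient-weighting heuristics that depend on raw loss magnitudes inherit this sensitivity; a method invariant to such transformations, as claimed for $\texttt{DiBS}$, would produce the same update direction in both cases.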
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 21587