Abstract: To create machine learning systems that serve a variety of users well, it is vital not only to achieve high average performance but also to ensure equitable outcomes across diverse groups. However, most machine learning methods are designed to improve a model's average performance on a chosen end task without consideration for their impact on worst-group error. Multitask learning (MTL) is one such widely used technique. In this paper, we seek not only to understand the impact of MTL on worst-group accuracy but also to explore its potential as a tool to address the challenge of group-wise fairness. We primarily consider the standard setting of fine-tuning a pre-trained model, where, following recent work \citep{gururangan2020don, dery2023aang}, we multitask the end task with the pre-training objective constructed from the end-task data itself. In settings with few or no group annotations, we find that multitasking often, but not consistently, achieves better worst-group accuracy than Just-Train-Twice (JTT; \citet{pmlr-v139-liu21f}), a representative distributionally robust optimization (DRO) method. Leveraging insights from synthetic data experiments, we propose to modify standard MTL by regularizing the joint multitask representation space. We run a large number of fine-tuning experiments across computer vision and natural language processing datasets and find that our regularized MTL approach \emph{consistently} outperforms JTT on both average and worst-group outcomes. Our official code can be found here: \href{https://github.com/atharvajk98/MTL-group-robustness.git}{\url{https://github.com/atharvajk98/MTL-group-robustness}}.
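As a rough illustration of the approach summarized in the abstract, the sketch below combines an end-task loss with a self-supervised pre-training loss computed on the same batch, plus a penalty on the shared representation. The model interface (`encode`, `task_head`, `self_supervised_loss`), the choice of an L2 penalty, and the loss weights are illustrative assumptions, not the paper's exact formulation (see the official repository for the actual implementation).

```python
import torch
import torch.nn.functional as F

def regularized_mtl_loss(model, batch, mtl_weight=1.0, reg_weight=0.1):
    """Hypothetical sketch of a regularized MTL fine-tuning objective.

    Assumes `model` exposes a shared encoder (`encode`), a supervised head
    (`task_head`), and a self-supervised loss built from the end-task inputs
    (`self_supervised_loss`); these names are illustrative.
    """
    # Shared encoder produces the joint representation used by both objectives.
    features = model.encode(batch["inputs"])

    # Supervised end-task loss (e.g., classification).
    task_logits = model.task_head(features)
    task_loss = F.cross_entropy(task_logits, batch["labels"])

    # Self-supervised pre-training objective constructed from the end-task
    # data itself (e.g., masked-token or masked-patch prediction).
    aux_loss = model.self_supervised_loss(batch["inputs"])

    # Regularize the joint multitask representation space (here, an L2 penalty).
    reg_loss = features.pow(2).sum(dim=-1).mean()

    return task_loss + mtl_weight * aux_loss + reg_weight * reg_loss
```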
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: The camera-ready version contains three main updates that address the reviewers' concerns:
1. **Problem Definition Section:** We have introduced a dedicated section that thoroughly defines the problem. This addition aims to enhance the clarity and precision of our research objectives.
2. **Broader Impact Statement:** We have included a more comprehensive impact statement in response to the reviewers' suggestions. This section now discusses the effects of hyperparameters and computational costs, offering a broader perspective on the implications of our work.
3. **Clarification on Synthetic Data Setup vs. Empirical Experiments:** To provide a clearer distinction, we have expanded on the differences between the synthetic data setup and the empirical experiments conducted on real-world data. This elaboration resolves potential ambiguity and makes our experimental approach more transparent.
We appreciate the thorough review process and the thoughtful feedback from the reviewers!
Code: https://github.com/atharvajk98/MTL-group-robustness
Assigned Action Editor: ~Colin_Raffel1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1916