Identification of Task Affinity for Multi-Task Learning based on Divergence of Task Data

ICLR 2026 Conference Submission 22848 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Machine Learning, Multi-Task Learning, Task Affinity, AutoML
Abstract: Multi-task learning (MTL) can significantly improve performance by training shared models for related tasks. However, due to the risk of negative transfer between mismatched tasks, the effectiveness of MTL hinges on identifying which tasks should be learned together. In this paper, we show that for tabular datasets, this affinity between a pair of tasks can be predicted based on static features that characterize the relationship between the datasets of these tasks. Specifically, we show that we can train a regression model for predicting pairwise task affinity based on computationally efficient features, requiring ground-truth affinity values for only a small, random sample of task pairs to generalize across all possible pairs. We demonstrate on three benchmark tabular datasets that our proposed approach can predict affinity more accurately at lower computational cost than existing methods for identifying task affinity, which treat task data as black boxes and require training-based signals. Our work provides a practical and scalable solution to task grouping for MTL, enabling its effective application to tabular datasets with large numbers of tasks.
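The abstract does not specify the exact divergence features or regressor used, so the following is a minimal illustrative sketch of the general idea: compute cheap, static divergence features between pairs of tabular task datasets, fit a regression model on ground-truth affinity for a small random sample of pairs, and then predict affinity for all remaining pairs. The per-column Wasserstein distance features and the gradient-boosting regressor are assumptions for illustration, not the authors' exact method.

```python
# Hypothetical sketch: predict pairwise task affinity from static
# divergence features between tabular task datasets (feature choices are assumed).
import itertools
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.ensemble import GradientBoostingRegressor


def divergence_features(Xa, Xb):
    """Summary statistics of per-column 1-Wasserstein distances between two
    task datasets sharing the same feature space (an assumed feature set)."""
    d = [wasserstein_distance(Xa[:, j], Xb[:, j]) for j in range(Xa.shape[1])]
    return np.array([np.mean(d), np.max(d), np.min(d), np.std(d)])


def fit_affinity_model(task_data, affinity, sampled_pairs):
    """Fit a regressor on ground-truth affinity for a small sample of task pairs.

    task_data: list of (n_i, d) arrays, one per task.
    affinity: dict mapping (i, j) -> measured affinity for sampled pairs only.
    sampled_pairs: list of (i, j) index tuples with known affinity.
    """
    X = np.stack([divergence_features(task_data[i], task_data[j])
                  for i, j in sampled_pairs])
    y = np.array([affinity[(i, j)] for i, j in sampled_pairs])
    return GradientBoostingRegressor().fit(X, y)


def predict_all_pairs(model, task_data):
    """Predict affinity for every task pair from static features alone,
    without any additional multi-task training runs."""
    pairs = list(itertools.combinations(range(len(task_data)), 2))
    X = np.stack([divergence_features(task_data[i], task_data[j])
                  for i, j in pairs])
    return dict(zip(pairs, model.predict(X)))
```

In this sketch, the expensive step (measuring ground-truth affinity by actually co-training task pairs) is needed only for the sampled pairs; all other pairs are scored from the static features, which is what makes the approach scale to large numbers of tasks.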
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 22848