Keywords: Multi-task Learning, Autonomous Driving
Abstract: Multi-task learning (MTL) is favored for its efficiency and the potential for transfer learning achieved by sharing networks across tasks. While a series of multi-task optimization (MTO) algorithms have been proposed to address MTL's optimization challenges and improve performance, recent research claims that simple linear scalarization, which sums per-task losses with a carefully searched set of weights, is sufficient, casting doubt on the added value of more complex MTO algorithms. In this paper, we offer a novel perspective: linear scalarization and MTOs are closely related and can be combined to achieve both high performance and efficiency. Through an extensive empirical study, we show, for the first time, that well-performing linear scalarization exhibits specific characteristics in optimization metrics proposed by MTOs, such as high task-gradient magnitude similarity and a low condition number. We then propose AutoScale, an efficient pipeline that leverages these influential metrics to guide the search for optimal linear scalarization weights. AutoScale outperforms prior MTOs and consistently performs close to exhaustively searched weights across different datasets.
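The abstract names two MTO metrics that characterize well-performing linear scalarization: task-gradient magnitude similarity and the condition number of the task gradients. The snippet below is a minimal, illustrative sketch of how these quantities can be computed, not the paper's implementation; the function names and the `shared_params` argument are hypothetical placeholders.

```python
# Illustrative sketch (assumptions, not the paper's code): compute per-task
# gradients w.r.t. shared parameters, then the two metrics mentioned in the
# abstract -- pairwise gradient magnitude similarity and the condition number
# of the stacked task-gradient matrix.
import torch


def task_gradients(losses, shared_params):
    """Flattened gradient of each task loss w.r.t. the shared parameters."""
    grads = []
    for loss in losses:
        g = torch.autograd.grad(loss, shared_params, retain_graph=True)
        grads.append(torch.cat([p.reshape(-1) for p in g]))
    return torch.stack(grads)  # shape: (num_tasks, num_shared_params)


def magnitude_similarity(G):
    """Pairwise magnitude similarity 2*|g_i|*|g_j| / (|g_i|^2 + |g_j|^2)."""
    norms = G.norm(dim=1)
    num = 2 * norms.unsqueeze(0) * norms.unsqueeze(1)
    den = norms.unsqueeze(0) ** 2 + norms.unsqueeze(1) ** 2
    return num / den  # 1.0 on the diagonal; closer to 1.0 means similar magnitudes


def condition_number(G):
    """Ratio of largest to smallest singular value of the task-gradient matrix."""
    s = torch.linalg.svdvals(G)
    return s.max() / s.min().clamp_min(1e-12)
```

Under this reading, a candidate weight vector would be scored by how high the off-diagonal magnitude similarity and how low the condition number are for the weighted losses, and the search over scalarization weights would favor candidates with better scores.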
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5346