Aligned Multi Objective Optimization

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We develop gradient descent algorithms for solving optimization problems with multiple aligned objectives and provide provable guarantees that our adaptations improve performance.
Abstract: To date, the multi-objective optimization literature has mainly focused on conflicting objectives, studying the Pareto front, or requiring users to balance tradeoffs. Yet, in machine learning practice, there are many scenarios where such conflict does not take place. Recent findings from multi-task learning, reinforcement learning, and LLM training show that diverse related tasks can enhance performance across objectives simultaneously. Despite this evidence, this phenomenon has not been examined from an optimization perspective. This leads to a lack of generic gradient-based methods that can scale to scenarios with a large number of related objectives. To address this gap, we introduce the Aligned Multi-Objective Optimization framework, propose new algorithms for this setting, and provide theoretical guarantees of their superior performance compared to naive approaches.
Lay Summary: It has become common wisdom that adding tasks or using multiple different datasets can improve the performance of a learning algorithm. We introduce the Aligned Multi-Objective Optimization (AMOO) framework that enables us to investigate this phenomenon from an optimization perspective. Specifically, we design a simple adaptation of gradient descent that is guaranteed to give better performance in the presence of multi-objective feedback when the objective functions are aligned.
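To make the aligned setting concrete, here is a minimal sketch of gradient descent with multi-objective feedback on objectives that share a common minimizer. This is not the paper's algorithm (see the linked code for that); the `amoo_style_gd` function, the uniform averaging of per-objective gradients, and the quadratic objectives are all illustrative assumptions.

```python
import numpy as np

def amoo_style_gd(grads_fn, x0, lr=0.1, steps=100):
    """Gradient descent with per-objective gradient feedback.

    Hypothetical sketch: at each step we receive one gradient per
    objective and descend along their average (a naive combination;
    the paper studies better-performing adaptations).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        grads = grads_fn(x)            # list of per-objective gradients
        g = np.mean(grads, axis=0)     # naive uniform combination
        x = x - lr * g
    return x

# Two aligned quadratics sharing the minimizer x* = [1, 2]:
# f_i(x) = 0.5 (x - x*)^T A_i (x - x*), so grad f_i(x) = A_i (x - x*)
A1, A2 = np.diag([1.0, 3.0]), np.diag([2.0, 0.5])
x_star_true = np.array([1.0, 2.0])
grads = lambda x: [A1 @ (x - x_star_true), A2 @ (x - x_star_true)]

x_hat = amoo_style_gd(grads, np.zeros(2), lr=0.2, steps=200)
```

Because the objectives are aligned (same minimizer), even this naive averaged-gradient descent converges to the shared optimum; the framework asks how to combine the gradients to do provably better than such naive schemes.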
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/facebookresearch/AlignedMultiObjectiveOptimization
Primary Area: Optimization
Keywords: Multi objective optimization, optimization, multi-task learning
Submission Number: 12151