Supplementary Material: pdf
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Multi-Objective Optimization, Machine Learning, Optimization
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We identify the inferiority of Pareto stationary solutions produced by the Multiple Gradient Descent Algorithm (MGDA) under various scenarios and propose refined partitioning of variables to fix it (RP-MGDA).
Abstract: Multi-objective optimization is a critical topic in the field of machine learning, with applications in multi-task learning, federated learning, reinforcement learning, and more. In this paper, to improve the performance of the widely used Multiple Gradient Descent Algorithm (MGDA) in multi-objective optimization tasks, we introduce a novel version of MGDA based on Refined Partitioning (RP-MGDA). RP-MGDA leverages the concept of "refined partitioning", in which variables are strategically partitioned and grouped to improve the optimization process, in contrast to vanilla MGDA, which ignores potential variable structure and, as a result, treats all parameters as one variable. Our examples and experiments showcase the effectiveness of RP-MGDA compared to MGDA under various scenarios. We provide insights into the underlying mechanisms of RP-MGDA and demonstrate its potential applications. Notably, the concept of refined variable partitioning in RP-MGDA is not limited to MGDA and holds promise for enhancing other multi-objective gradient methods (e.g., PCGrad, CAGrad).
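To make the contrast concrete, the following is a minimal sketch of the two ideas the abstract describes, for the two-objective case: vanilla MGDA computes the min-norm point in the convex hull of the full (concatenated) gradients, while a refined-partitioning variant applies the same combination separately on each variable block. The function names, the block-list interface, and the per-block application are illustrative assumptions, not the paper's actual algorithm; only the two-gradient min-norm closed form is standard MGDA.

```python
import numpy as np

def mgda_direction(g1, g2):
    """Vanilla MGDA for two objectives: min-norm point in the convex
    hull of gradients g1 and g2 (closed form for the two-gradient case)."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:          # identical gradients: either one works
        return g1.copy()
    # weight on g1 minimizing ||a*g1 + (1-a)*g2||^2, clipped to [0, 1]
    alpha = np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2

def rp_mgda_direction(g1, g2, blocks):
    """Hypothetical refined-partitioning sketch: apply the min-norm
    combination per variable block instead of on the full gradient.
    `blocks` is a list of index arrays partitioning the variables."""
    d = np.zeros_like(g1)
    for idx in blocks:
        d[idx] = mgda_direction(g1[idx], g2[idx])
    return d
```

For conflicting gradients such as g1 = (1, 0) and g2 = (0, 1), `mgda_direction` returns the balanced direction (0.5, 0.5); the partitioned variant combines gradients within each block independently, which is where the choice of partition changes the resulting update.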
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2878