Keywords: Multi-objective Learning; Preference-Guided Learning; Constrained Vector Optimization
TL;DR: We cast preference-guided multi-objective learning as a constrained vector optimization problem to capture flexible preferences, under which provably convergent algorithms are developed.
Abstract: Finding specific preference-guided Pareto solutions that represent different trade-offs among multiple objectives is critical yet challenging in multi-objective problems.
Existing methods are restrictive in how preferences can be defined and/or lack theoretical guarantees.
In this work, we introduce a Flexible framEwork for pREfeRence-guided multi-Objective learning (**FERERO**) by casting it as a constrained vector optimization problem.
Specifically, two types of preferences are incorporated into this formulation -- the *relative preference* defined by the partial ordering induced by a polyhedral cone, and the *absolute preference* defined by constraints that are linear functions of the objectives.
To solve this problem, convergent algorithms are developed with both single-loop and stochastic variants.
Notably, this is the *first single-loop primal algorithm* for constrained vector optimization to our knowledge.
The proposed algorithms adaptively adjust to both constraint and objective values, eliminating the need to solve different subproblems at different stages of constraint satisfaction.
Experiments on multiple benchmarks demonstrate that the proposed method is highly competitive in finding preference-guided optimal solutions.
Code is available at https://github.com/lisha-chen/FERERO/.
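To make the two preference types concrete, here is a minimal sketch (not the FERERO implementation; matrices `A`, `B` and vector `c` are illustrative assumptions): a polyhedral cone `K = {v : A v >= 0}` induces the partial order used for relative preferences, while absolute preferences are linear inequality constraints on the objective vector.

```python
import numpy as np

def in_cone(v, A, tol=1e-9):
    """Membership test for the polyhedral cone K = {v : A v >= 0}."""
    return bool(np.all(A @ v >= -tol))

def relative_pref_dominates(f_x, f_y, A):
    """F(x) dominates F(y) under the partial order induced by the cone:
    F(x) <=_K F(y)  iff  F(y) - F(x) lies in K."""
    return in_cone(f_y - f_x, A)

def absolute_pref_satisfied(f_x, B, c, tol=1e-9):
    """Absolute preference: linear constraints B F(x) <= c on the objectives."""
    return bool(np.all(B @ f_x <= c + tol))

# With A = identity, K is the nonnegative orthant and the cone order
# reduces to standard Pareto dominance on two objectives.
A = np.eye(2)
assert relative_pref_dominates(np.array([1., 1.]), np.array([2., 3.]), A)
assert not relative_pref_dominates(np.array([2., 1.]), np.array([1., 3.]), A)
```

Choosing a non-identity `A` tilts the cone, so the induced order encodes a relative trade-off preference among objectives rather than plain Pareto dominance.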
Primary Area: Optimization (convex and non-convex, discrete, stochastic, robust)
Submission Number: 5826