Preference-Aware Mixture-of-Experts for Multi-Objective Combinatorial Optimization

ICLR 2026 Conference Submission 18931 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: combinatorial problem; multi-objective optimization; neural network; preference-aware learning
Abstract: Recent neural methods for multi-objective combinatorial optimization (MOCO) solve preference-specific subproblems with a single model and have achieved competitive performance. However, they still suffer from limited learning efficiency and insufficient exploration of the solution space. This paper presents a theoretical analysis revealing the equivalence between this single-model paradigm and an implicit Mixture-of-Experts architecture. Building on this insight, we propose a Preference-Aware Mixture-of-Experts (PA-MoE) framework that learns preference-specific representations while explicitly modeling preference-instance interactions. By integrating a sparsely activated expert module with a novel preference-aware gating mechanism, PA-MoE enhances preference-conditioned representation learning while preserving parameter efficiency. Moreover, PA-MoE is generic and can be applied to three different neural MOCO solvers. Experimental results on the multi-objective traveling salesman problem (MOTSP), the multi-objective capacitated vehicle routing problem (MOCVRP), and the multi-objective knapsack problem (MOKP) show that PA-MoE generates Pareto fronts with higher diversity and achieves superior overall performance.
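For illustration only, the sketch below shows one way a sparsely activated expert module with preference-aware gating might be realized in PyTorch, where the gate conditions on both the instance embedding and the preference (objective-weight) vector. The class name, layer sizes, and top-k routing are assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreferenceAwareMoE(nn.Module):
    """Illustrative sparse MoE layer whose gate is conditioned on a preference vector."""

    def __init__(self, d_model: int, n_prefs: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Feed-forward experts sharing the same input/output width.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])
        # Gate sees the instance embedding concatenated with the preference weights.
        self.gate = nn.Linear(d_model + n_prefs, n_experts)

    def forward(self, x: torch.Tensor, pref: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model) instance embeddings; pref: (batch, n_prefs) preference weights.
        logits = self.gate(torch.cat([x, pref], dim=-1))
        weights, idx = logits.topk(self.top_k, dim=-1)   # sparse activation: keep top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * self.experts[e](x[mask])
        return out

# Example usage with hypothetical sizes (two objectives, 128-dim embeddings).
layer = PreferenceAwareMoE(d_model=128, n_prefs=2)
y = layer(torch.randn(32, 128), torch.rand(32, 2))
```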
Supplementary Material: zip
Primary Area: optimization
Submission Number: 18931