Towards Robust Multi-Objective Optimization: Adversarial Attack and Defense Methods for Neural Solvers
Keywords: Robustness; Multi-Objective Combinatorial Optimization; Neural Solvers
Abstract: Deep reinforcement learning has shown great promise in addressing multi-objective combinatorial optimization problems. Nevertheless, the robustness and generalizability of existing neural solvers remain insufficiently explored, especially across diverse and complex problem distributions. This work proposes a novel preference-based adversarial attack method that generates hard problem instances to expose the vulnerabilities of neural solvers. We measure a solver's vulnerability by the degree to which its hypervolume performance degrades when it is tested on hard instances. To mitigate the adversarial effect, we propose a defense method that integrates hardness-aware preference selection into adversarial training, achieving substantial improvements in solver robustness and generalizability.
The experimental results on multi-objective traveling salesman problem (MOTSP), multi-objective capacitated vehicle routing problem (MOCVRP), and multi-objective knapsack problem (MOKP) verify that our attack method successfully learns hard instances for different solvers. Furthermore, our defense method significantly strengthens the robustness and generalizability of neural solvers, delivering superior performance on hard or out-of-distribution instances.
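As an illustrative aside (not part of the submission itself), the hypervolume indicator used above to quantify solver degradation measures the objective-space region dominated by a solution set and bounded by a reference point. A minimal sketch for the two-objective minimization case, with a hypothetical point set and reference point:

```python
# Illustrative sketch, not the paper's implementation: hypervolume of a
# 2-D Pareto front for a minimization problem, w.r.t. a reference point
# that every solution must dominate. Points and reference are hypothetical.

def hypervolume_2d(front, ref):
    """Area dominated by `front` and bounded above by `ref` (minimization)."""
    # Sweep points in increasing order of the first objective; each
    # non-dominated point contributes a rectangular slab of area.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:  # non-dominated w.r.t. points already swept
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # → 12.0
```

A larger hypervolume indicates a better approximation of the Pareto front, so an attack is deemed effective when it produces instances on which the solver's hypervolume drops sharply relative to nominal instances.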
Primary Area: optimization
Submission Number: 18869