Generalizable Multilevel Graph Optimization via Reinforcement-Guided Diffusion from Simple Subproblems
Keywords: Graph Optimization, Diffusion Models, Reinforcement Learning, Graph Neural Networks
TL;DR: This paper proposes a Reinforcement Learning-Guided Diffusion Model (RLG-DM) to solve multilevel graph combinatorial problems by leveraging structural priors and enabling effective generalization to unseen, complex tasks.
Abstract: Solving multilevel graph combinatorial problems (GCPs) is challenging due to their structural complexity and the limited generalization of existing learning-based optimization algorithms: models trained on simple GCPs often fail to transfer to larger or more complex multilevel problems. We propose a Reinforcement Learning-Guided Diffusion Model (RLG-DM) that addresses this challenge by composing structural priors learned from simple problems. In the forward diffusion process, noise is progressively injected into the structures of a set of graph neural networks, each representing, and pretrained on, a simple combinatorial optimization problem such as the facility location problem or the vehicle routing problem. In the reverse diffusion phase, a reinforcement learning controller guides the stepwise generation of subgraphs from a randomly initialized graph, dynamically selecting and combining the pretrained diffusion models according to the task-specific hierarchy. Trained only on representative subproblems, the controller generalizes to unseen multilevel GCP tasks without retraining. We evaluate RLG-DM on representative multilevel GCPs, including the location routing problem, the nurse rostering problem, and flexible job shop scheduling. Experimental results show that RLG-DM consistently outperforms state-of-the-art baselines and generalizes effectively to structurally diverse, unseen tasks.
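The control flow the abstract describes (a controller picking one pretrained denoiser per reverse-diffusion step on a randomly initialized graph) can be sketched as follows. This is a minimal toy illustration under stated assumptions, not the authors' implementation: the names `denoise_flp`, `denoise_vrp`, `Controller`, and the deterministic selection rule are all hypothetical stand-ins for the learned models and the RL policy.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_flp(adj, t):
    # Toy stand-in for a "facility location" prior: damp weak edges.
    return np.where(adj > 0.5, adj, adj * 0.5)

def denoise_vrp(adj, t):
    # Toy stand-in for a "vehicle routing" prior: symmetrize edges.
    return (adj + adj.T) / 2

class Controller:
    """Stand-in for the RL policy that selects a pretrained model per step."""
    def __init__(self, models):
        self.models = models

    def select(self, adj, t):
        # A real controller would condition on the task hierarchy and
        # current graph state; here we alternate for illustration only.
        return self.models[t % len(self.models)]

def reverse_diffusion(n_nodes, steps, controller):
    # Start from a randomly initialized graph (dense adjacency weights).
    adj = rng.random((n_nodes, n_nodes))
    # Step t = steps-1, ..., 0: controller picks a denoiser, applies it.
    for t in reversed(range(steps)):
        model = controller.select(adj, t)
        adj = model(adj, t)
    return adj

controller = Controller([denoise_flp, denoise_vrp])
final = reverse_diffusion(n_nodes=5, steps=4, controller=controller)
```

The sketch only illustrates the composition pattern: the subproblem priors are interchangeable callables, so generalizing to a new multilevel task amounts to the controller choosing a different sequence over the same frozen models.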
Primary Area: optimization
Submission Number: 10676