Abstract: This paper presents a novel method for approximating solutions to the maximum stable set (maximum independent set) problem, a classical NP-hard combinatorial optimization problem. In contrast to traditional greedy or heuristic algorithms, we combine graph embedding with DQN-based reinforcement learning, making this NP-hard optimization problem trainable so that near-optimal solutions can be obtained on unseen graphs. To the best of our knowledge, this is a new approach to the maximum stable set problem. The learned policy incrementally selects a sequence of nodes to construct the stable set, with each action determined by the outputs of a graph embedding network applied to the current partial solution. Our numerical experiments suggest that the proposed algorithm is promising for tackling the maximum stable set problem.
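The incremental construction loop described in the abstract can be sketched as follows. This is a hedged illustration only: the paper's graph embedding network and DQN are not specified here, so the learned Q-function is replaced by a simple degree-based stand-in score; the function names (`eligible_nodes`, `score`, `build_stable_set`) are illustrative, not from the paper.

```python
# Sketch of incrementally building a stable (independent) set by
# repeatedly adding the highest-scoring eligible node. In the paper's
# method the score would come from a graph-embedding DQN evaluated on
# the current partial solution; here we substitute a low-degree-first
# heuristic purely for illustration.

def eligible_nodes(adj, solution):
    """Nodes not in the partial solution and not adjacent to it."""
    blocked = set(solution)
    for v in solution:
        blocked |= adj[v]
    return [v for v in adj if v not in blocked]

def score(adj, solution, v):
    # Placeholder for the learned Q(state, action); prefers low-degree
    # nodes, a classic greedy heuristic for independent sets.
    return -len(adj[v])

def build_stable_set(adj):
    solution = []
    while True:
        candidates = eligible_nodes(adj, solution)
        if not candidates:
            return solution
        solution.append(max(candidates, key=lambda v: score(adj, solution, v)))

# Example: a 4-cycle 0-1-2-3; a maximum stable set has size 2.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(sorted(build_stable_set(adj)))  # → [0, 2]
```

In the trained setting, `score` would be the Q-network's output, and the same loop serves both for generating training episodes and for decoding solutions on new graphs.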