Scale-conditioned Adaptation for Large Scale Combinatorial Optimization

Published: 21 Oct 2022, Last Modified: 16 May 2023. NeurIPS 2022 Workshop DistShift Poster.
Keywords: Combinatorial Optimization, Scalability, Adaptation, Transfer Learning, Reinforcement Learning
TL;DR: This paper proposes an effective adaptation scheme for large-scale combinatorial optimization problems.
Abstract: Deep reinforcement learning (DRL) for combinatorial optimization has drawn attention as an alternative to human-designed solvers. However, training DRL solvers for large-scale tasks remains challenging due to the NP-hardness of combinatorial optimization problems. This paper proposes a novel \textit{scale-conditioned adaptation} (SCA) scheme that improves the transferability of pre-trained solvers to larger-scale tasks. The main idea is to design a scale-conditioned policy by plugging a simple deep neural network, denoted a \textit{scale-conditioned network} (SCN), into the existing DRL model. SCN extracts a hidden vector from a scale value, which we then add to the representation vector of the pre-trained DRL model. This increment to the representation vector captures the context of the scale information and helps the pre-trained model effectively adapt its policy to larger-scale tasks. We verify that our method improves the zero-shot and few-shot performance of DRL-based solvers on various large-scale combinatorial optimization tasks.
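A minimal sketch of how such a scale-conditioned network might be wired into a pre-trained model, assuming a PyTorch-style encoder. The class name, MLP depth, hidden size, activation, and the use of the raw scale value as input are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class ScaleConditionedNetwork(nn.Module):
    """Hypothetical SCN: maps a scalar problem scale (e.g., number of
    nodes) to a hidden vector matching the pre-trained model's
    representation dimension."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        # Small MLP over the scalar scale value (architecture assumed).
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, scale: torch.Tensor) -> torch.Tensor:
        # scale: (batch, 1) tensor holding the instance size, e.g., N nodes
        return self.mlp(scale)

# Usage sketch: add the scale embedding to the pre-trained encoder's output.
hidden_dim = 128
scn = ScaleConditionedNetwork(hidden_dim)
h = torch.randn(4, hidden_dim)        # stand-in for pre-trained representations
scale = torch.full((4, 1), 200.0)     # e.g., adapting to 200-node instances
h_conditioned = h + scn(scale)        # scale-aware representation vector
```

Under this reading, only the additive increment carries the scale context, so the pre-trained encoder's weights can remain largely intact during few-shot adaptation.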