Abstract: Reinforcement learning for solving graph optimization problems has attracted increasing attention recently. Typically, such models require extensive training over numerous graph instances to develop strategies that generalize across diverse graph types, demanding significant computational resources and time. Instead of tackling these problems one by one, we propose to employ transfer learning to leverage knowledge gained from solving one graph optimization problem when solving another. Our proposed framework, dubbed State Extraction with Transfer-learning (SET), focuses on quickly adapting a model trained for a specific graph optimization task to a new but related problem by accounting for the distributional differences in objective values between the two problems. We conduct a series of experimental evaluations on both synthetically generated graphs and graphs sourced from real-world data. The results demonstrate that SET outperforms other algorithmic and learning-based baselines. Additionally, our analysis of knowledge transferability provides insights into the effectiveness of applying models trained on one graph optimization task to another. Our study is among the first to explore transfer learning in the context of graph optimization problems.