Multifactorial evolutionary deep reinforcement learning for multitask node combinatorial optimization in complex networks

Published: 2025, Last Modified: 01 Aug 2025, Inf. Sci. 2025, CC BY-SA 4.0
Abstract: Node combinatorial optimization (NCO) tasks in complex networks aim to activate a set of influential nodes that maximally affect network performance under a given influence model; representative tasks include influence maximization, robustness optimization, minimum node coverage, minimum dominating set, and maximum independent set, and they are usually NP-hard. Existing works mainly solve these tasks separately, and none of them can effectively solve all of the tasks, owing to the differences in their influence models and their NP-hardness. To tackle this issue, in this article we first theoretically demonstrate the similarity among these NCO tasks and model them as a multitask NCO problem. Then, we transform this multitask NCO problem into the weight optimization of a multi-head deep Q-network (multi-head DQN), which models the activation of influential nodes and uses shared layers to capture the similarity among tasks and unshared output (head) layers to capture their differences. Finally, we propose a Multifactorial Evolutionary Deep Reinforcement Learning (MF-EDRL) method for solving the multitask NCO problem under the multi-head DQN optimization framework, which promotes implicit knowledge transfer between similar tasks. Extensive experiments on both benchmark and real-world networks show clear advantages of the proposed MF-EDRL over the state of the art on all NCO tasks. Most notably, the results also reflect the effectiveness of inter-task information transfer in accelerating optimization and improving performance.
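The abstract describes the multi-head DQN only at a high level. A minimal sketch of such an architecture, in PyTorch, is shown below; the layer sizes, the two-layer shared trunk, and the greedy node-selection snippet are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class MultiHeadDQN(nn.Module):
    """A shared trunk captures structure common to all NCO tasks;
    one unshared output head per task captures task-specific differences."""

    def __init__(self, node_feat_dim: int, hidden_dim: int, num_tasks: int):
        super().__init__()
        # Shared layers: learn node representations reused across tasks.
        self.shared = nn.Sequential(
            nn.Linear(node_feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Unshared heads: one per-node Q-value estimator per task
        # (e.g. influence maximization, robustness optimization, ...).
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, 1) for _ in range(num_tasks)
        )

    def forward(self, node_features: torch.Tensor, task_id: int) -> torch.Tensor:
        # node_features: (num_nodes, node_feat_dim). Returns one Q-value
        # per node, scoring its activation under the given task.
        h = self.shared(node_features)
        return self.heads[task_id](h).squeeze(-1)

# Toy usage: greedily activate the node with the highest Q-value for task 0.
net = MultiHeadDQN(node_feat_dim=16, hidden_dim=64, num_tasks=5)
feats = torch.randn(100, 16)       # hypothetical features for a 100-node network
q = net(feats, task_id=0)          # per-node Q-values under task 0
best_node = int(torch.argmax(q))   # next node to activate
```

In this framing, the "weight optimization" the paper performs would act on the parameters of such a network, with a multifactorial evolutionary search over the shared weights enabling implicit transfer between tasks.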