Overlap-aware influence maximization with balanced replay deep Q-network

Published: 2025 · Last Modified: 15 Jan 2026 · Knowl. Inf. Syst. 2025 · CC BY-SA 4.0
Abstract: Influence maximization (IM) in online social networks, an NP-hard combinatorial optimization problem, has been studied extensively over the past decades due to its wide range of potential applications, such as viral marketing. However, existing algorithms remain limited in accuracy, scalability, and generalization ability. Moreover, they handle influence overlap only implicitly, failing to account for the overlap between users' influence, which may lead to sub-optimal performance. In this paper, we propose a multi-agent seed selection (MASS) scheme, a reinforcement learning (RL)-based method for IM. Unlike previous methods, MASS explicitly accounts for influence overlap during the propagation process and uses it as a criterion when selecting seed nodes. MASS first estimates the extent of influence overlap of each candidate node using the Collision Algorithm and then decides whether to accept or drop the candidate node using RL agents. Furthermore, to better adapt the RL structure to our problem setting, we propose BDQN, a balanced deep Q-network, to improve training efficiency and model robustness. Experiments on eight real-world social networks validate the effectiveness and efficiency of the proposed algorithm.
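The abstract does not spell out the Collision Algorithm, the RL agents, or BDQN. As a rough, self-contained illustration of the overlap-aware accept/drop idea only, the sketch below estimates a candidate's overlap with the current seed set via Monte Carlo simulation of the independent cascade model and accepts the candidate when the overlap is below a threshold. All function names, the graph, and the parameters (`p`, `max_overlap`, `runs`) are our own assumptions, not the paper's method.

```python
import random

def ic_reach(adj, sources, p=0.2, rng=None):
    """One Monte Carlo run of the independent cascade model: each newly
    activated node tries once to activate each neighbor with probability p.
    Returns the set of activated nodes (including the sources)."""
    rng = rng or random
    active = set(sources)
    frontier = list(sources)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def estimate_overlap(adj, seeds, cand, p=0.2, runs=200, seed=0):
    """Fraction of the candidate's influenced nodes that the current seed
    set already reaches, averaged over Monte Carlo runs (a stand-in for
    the paper's overlap estimate)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        reach_s = ic_reach(adj, seeds, p, rng) if seeds else set()
        reach_c = ic_reach(adj, [cand], p, rng)
        total += len(reach_c & reach_s) / max(len(reach_c), 1)
    return total / runs

def select_seeds(adj, candidates, k, max_overlap=0.5, p=0.2):
    """Greedy overlap-aware selection: accept a candidate only if its
    estimated overlap with the already chosen seeds is small enough."""
    seeds = []
    for cand in candidates:
        if len(seeds) == k:
            break
        if estimate_overlap(adj, seeds, cand, p) <= max_overlap:
            seeds.append(cand)
    return seeds

# Toy graph: two loosely connected clusters around hubs 0 and 4.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0, 4],
       4: [3, 5, 6, 7], 5: [4, 6], 6: [4, 5], 7: [4]}
print(select_seeds(adj, [0, 1, 4], k=2))
```

In the actual MASS scheme, the accept/drop decision is made by trained RL agents (with BDQN) rather than a fixed threshold; the sketch only shows where an overlap estimate would enter the seed-selection loop.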