Abstract: Influence Maximization (IM) in graphs aims to identify a subset of influential nodes that maximizes the influence spread under a propagation model. Existing works on IM mainly focus on influence propagation over graphs with pairwise edges, neglecting the effects of hyperedges on influence propagation. A hyperedge connects a set of nodes, so influence propagation along a hyperedge can trigger multiple nodes simultaneously. In this article, to tackle this issue, we propose an evolutionary deep reinforcement learning algorithm (called HEDRL-IM) for IM in hypergraphs. More specifically, the proposed HEDRL-IM first uses a deep Q network (DQN) to transform IM in hypergraphs into a network weight optimization problem. Then, it combines an evolutionary algorithm with reinforcement learning to effectively tackle the DQN weight optimization problem. Next, to improve effectiveness and efficiency, HEDRL-IM incorporates an estimated influence feature to address the sparse rewards arising from the intricacies of the hypergraph linear threshold model, and proposes a novel simulation optimization process for influence propagation to reduce redundant fitness simulations. Finally, extensive experiments on both synthetic and real-world hypergraphs show that HEDRL-IM outperforms state-of-the-art methods in finding seed nodes for influence propagation. The source code of HEDRL-IM is available at https://github.com/1873177187/HEDRL-IM.
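To illustrate the "simultaneous trigger" effect of hyperedges mentioned above, the sketch below simulates one common variant of a hypergraph linear threshold cascade (this is an illustrative assumption, not necessarily the exact model used by HEDRL-IM): a hyperedge fires once the fraction of its active members reaches a threshold, and firing activates all of its remaining members at once.

```python
def hyper_lt_spread(hyperedges, seeds, theta=0.5):
    """Simulate one cascade of a hypergraph linear-threshold model.

    hyperedges: list of node collections (each a hyperedge).
    seeds: initially active node set.
    theta: activation threshold on the fraction of active members
           of a hyperedge (a simplifying assumption; per-node or
           per-edge thresholds are also common in the literature).
    Returns the final set of active nodes.
    """
    active = set(seeds)
    fired = set()          # indices of hyperedges that already fired
    changed = True
    while changed:
        changed = False
        for i, e in enumerate(hyperedges):
            members = set(e)
            if i in fired:
                continue
            # Hyperedge fires when enough of its members are active.
            if len(active & members) / len(members) >= theta:
                fired.add(i)
                newly_active = members - active
                if newly_active:
                    # All remaining members activate simultaneously.
                    active |= newly_active
                    changed = True
    return active
```

With seeds {0, 1} and hyperedges [{0,1,2}, {2,3}, {3,4,5}], the first two hyperedges fire in turn while the third never reaches the threshold, yielding active set {0, 1, 2, 3}. An IM algorithm would wrap such a simulation in repeated evaluations to score candidate seed sets.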