Unraveling human social behavior motivations via inverse reinforcement learning-based link prediction

Published: 01 Jan 2024, Last Modified: 05 Mar 2025 · Computing 2024 · CC BY-SA 4.0
Abstract: Link prediction aims to capture the evolution of network structure, especially in real social networks, and supports applications such as friend recommendation and the simulation of human contact trajectories. However, the stochasticity of social behaviors and the unstable spatio-temporal distribution of such networks often lead to inaccurate and unexplainable link predictions. Therefore, taking inspiration from the success of imitation learning in simulating human driver behavior, we propose a dynamic network link prediction method based on inverse reinforcement learning (DN-IRL) to unravel the motivations behind social behaviors in social networks. Specifically, the historical social behaviors (link sequences) and the next behavior (a single link) are regarded as the current environmental state and the action taken by the agent, respectively. Subsequently, the reward function, which is designed to maximize the cumulative expected reward of the expert behaviors in the raw data, is optimized and used to learn the agent's social policy. Furthermore, our approach incorporates a neighborhood-structure-based node embedding and self-attention modules, making it sensitive to the network structure and keeping predicted links traceable. Experimental results on real-world dynamic social networks demonstrate that DN-IRL achieves more accurate and explainable predictions than the baselines.
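To make the state/action/reward formulation in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of how the link history could be encoded as a state with node embeddings and self-attention, and how a reward network could be trained to prefer observed (expert) next links over negatively sampled ones. All class names, shapes, and the margin-based objective are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of a DN-IRL-style formulation (not the authors' code).
import torch
import torch.nn as nn


class LinkHistoryEncoder(nn.Module):
    """Encodes the historical link sequence (the 'state') using node
    embeddings and self-attention over time."""

    def __init__(self, num_nodes: int, dim: int = 64, heads: int = 4):
        super().__init__()
        self.node_emb = nn.Embedding(num_nodes, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, link_history: torch.Tensor) -> torch.Tensor:
        # link_history: (batch, T, 2) node-index pairs for the past T links.
        # Represent each link by the sum of its endpoint embeddings.
        h = self.node_emb(link_history).sum(dim=2)   # (batch, T, dim)
        h, _ = self.attn(h, h, h)                    # self-attention over the sequence
        return h.mean(dim=1)                         # pooled state vector


class RewardNet(nn.Module):
    """Reward r(s, a): scores a candidate next link (the 'action')
    given the encoded state."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim * 2, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, state: torch.Tensor, action_emb: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([state, action_emb], dim=-1)).squeeze(-1)


# One illustrative IRL-style update: push the reward of the observed (expert)
# next link above that of a randomly sampled negative link.
encoder, reward = LinkHistoryEncoder(num_nodes=1000), RewardNet()
opt = torch.optim.Adam(list(encoder.parameters()) + list(reward.parameters()), lr=1e-3)

history = torch.randint(0, 1000, (8, 20, 2))   # batch of 20-step link histories
expert = torch.randint(0, 1000, (8, 2))        # observed next links
negative = torch.randint(0, 1000, (8, 2))      # sampled non-links

state = encoder(history)
emb = lambda pair: encoder.node_emb(pair).sum(dim=1)
loss = torch.relu(1.0 - reward(state, emb(expert)) + reward(state, emb(negative))).mean()
opt.zero_grad(); loss.backward(); opt.step()
```

In the paper's setting, the learned reward would then guide the agent's social policy when ranking candidate links; the margin loss above stands in for whatever IRL objective the authors actually optimize.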
