Abstract: Knowledge Graph (KG) reasoning plays a crucial role in knowledge graph completion, as it infers unknown information from the knowledge already present in the graph. Most current reinforcement-learning-based reasoning methods rely on a single-agent random walk. However, a single agent is often insufficient, and training multiple agents for this task is challenging. To overcome this obstacle, we propose an Adversary- and Attention-Guided Knowledge Graph Reasoning framework based on reinforcement learning (\({\textbf {A}}^2{\textbf {GKGR}}\)). Building on the Adversarially Guided Actor-Critic (AGAC) architecture, we create an adversary that learns from the agent's historical behavior. By leveraging the Kullback-Leibler (KL) divergence, the agent learns to distinguish its own action distribution from the adversary's prediction, which encourages broader exploration of paths in the knowledge graph and ultimately enhances the model's effectiveness. In addition, we introduce a self-attention mechanism that prunes the action space, addressing the large action spaces typical of knowledge graphs and improving both the effectiveness and the efficiency of the agent's action selection. We evaluated our method on multiple KG reasoning benchmarks, and the results show that it achieves strong performance and good interpretability.
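The abstract does not specify the exact loss or bonus weighting, but the AGAC-style idea it describes can be sketched as follows: the adversary predicts the agent's action distribution from its history, and the agent receives an intrinsic bonus proportional to the KL divergence between its policy and that prediction, rewarding behavior the adversary fails to anticipate. All names and the weighting coefficient below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete action distributions.

    p: the agent's policy over candidate actions (outgoing KG edges).
    q: the adversary's prediction of that policy, fit on the agent's history.
    eps guards against log(0) for zero-probability actions.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical example: three candidate edges at the current entity.
agent_pi = np.array([0.7, 0.2, 0.1])      # agent's current policy
adversary_pi = np.array([0.6, 0.3, 0.1])  # adversary's prediction from history

# Intrinsic bonus: large when the agent deviates from what the adversary
# expects, pushing the walk toward under-explored paths.
bonus = kl_divergence(agent_pi, adversary_pi)

# Shaped reward = extrinsic KG reward plus a weighted exploration bonus
# (the weight 0.5 is an assumed hyperparameter).
shaped_reward = 1.0 + 0.5 * bonus
```

When the adversary predicts the agent perfectly, the bonus vanishes and only the extrinsic reward remains, so the incentive to behave unpredictably disappears exactly when exploration has been exhausted.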