Enhancing Embedding and Hierarchical Reward Shaping for Multi-Hop Reasoning with Reinforcement Learning
Abstract: The knowledge graph (KG) reasoning task aims to infer missing triples to complete an incomplete KG and thereby improve performance in downstream applications. For this task, multi-hop reasoning methods based on reinforcement learning (RL) have exhibited improved reasoning performance and interpretability in recent research. A hierarchical reinforcement learning framework can better learn the semantic information of relations and entities, enhancing the model's learning and reasoning ability by decomposing relation-entity pairs for finer-grained reasoning. However, this decomposition also discards information shared between relations and entities, which makes it harder for the model to learn and reason. To address these issues, we propose an RL-based multi-hop reasoning framework consisting of an Enhancing Embedding mechanism and a Hierarchical Reward Shaping mechanism (EEHRS). In the EEHRS framework, we redesign the hierarchical model to learn the semantic information of relations and entities more effectively. The enhancing embedding mechanism supplements the information between relations and entities and warms up model training to accelerate convergence. To meet the different reasoning requirements of the relation layer and the entity layer, the hierarchical reward shaping mechanism introduces a new relation-layer reward that enhances the model's learning and reasoning abilities. In addition, we use an action space completion mechanism to add potentially missing tail entities, helping the agent search for reasoning paths. Experimental results on four benchmark datasets show that our model achieves the best or near-best performance.
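To make the hierarchical decomposition concrete, the following is a minimal sketch (not the authors' implementation) of the two-layer reasoning loop the abstract describes: at each hop the agent first selects a relation (relation layer) and then an entity reachable via that relation (entity layer), with each layer receiving its own shaped reward. All names here (toy_kg, rollout, the reward definitions, the discount gamma) are illustrative assumptions; the paper's actual policies are learned networks and its reward design differs in detail.

```python
# Minimal sketch of hierarchical relation/entity reasoning over a toy KG.
# Uniform random policies stand in for the learned relation- and entity-layer
# policies; rewards below are illustrative, not the paper's exact design.
import random

# Toy KG: head entity -> list of (relation, tail entity) edges.
toy_kg = {
    "paris": [("capital_of", "france"), ("located_in", "europe")],
    "france": [("part_of", "europe"), ("has_capital", "paris")],
}

def relation_policy(edges):
    # Relation layer: choose among relations available at the current entity.
    relations = sorted({r for r, _ in edges})
    return random.choice(relations)

def entity_policy(relation, edges):
    # Entity layer: choose a tail entity consistent with the chosen relation.
    tails = [t for r, t in edges if r == relation]
    return random.choice(tails)

def rollout(start, target, max_hops=3, gamma=0.9):
    """Roll out one reasoning path; return the path and its shaped return."""
    entity, path, total = start, [start], 0.0
    for hop in range(max_hops):
        edges = toy_kg.get(entity, [])
        if not edges:
            break
        relation = relation_policy(edges)
        entity = entity_policy(relation, edges)
        path.append(f"--{relation}--> {entity}")
        # Hierarchical reward shaping (illustrative): the relation layer is
        # rewarded when the chosen relation can still lead to the target,
        # the entity layer only when the target is actually reached.
        relation_reward = 1.0 if any(t == target for r, t in edges if r == relation) else 0.0
        entity_reward = 1.0 if entity == target else 0.0
        total += (gamma ** hop) * (relation_reward + entity_reward)
        if entity == target:
            break
    return path, total

random.seed(0)
print(rollout("paris", "europe"))
```

In a trained model, the two `random.choice` calls would be replaced by sampling from learned relation- and entity-layer policy distributions, and the per-layer rewards would drive separate policy-gradient updates.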