CuriousWalk: Enhancing Multi-Hop Reasoning in Graphs with Random Network Distillation

Published: 28 Oct 2023, Last Modified: 21 Dec 2023 · NeurIPS 2023 GLFrontiers Workshop Poster
Keywords: Multi-Hop Reasoning, Reinforcement Learning, Random Network Distillation
TL;DR: Addresses reward sparsity and improves exploration in knowledge-graph reasoning through intrinsic rewards.
Abstract: Structured knowledge bases in the form of graphs often suffer from incompleteness and inaccuracy in representing information. One popular approach to densifying such graphs trains a reinforcement learning agent that, starting from a query entity and conditioned on a query relation, sequentially traverses entities and relations until it reaches the desired answer entity. However, these agents are often limited by the sparse reward structure of the environment, as well as by their inability to find diverse paths from the question entity to the answer entity. In this paper, we attempt to address these issues by augmenting the agent with intrinsic rewards, which aid exploration and offer meaningful feedback at intermediate steps to push the agent in the right direction.
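The intrinsic reward mentioned in the title follows Random Network Distillation (RND): a fixed, randomly initialized target network and a trained predictor network both embed a state, and the predictor's error serves as an exploration bonus that decays as states become familiar. A minimal sketch, using simple linear "networks" and hypothetical embedding dimensions (none of these names or sizes come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for a state (e.g. entity) embedding and the
# feature space the networks map into.
IN_DIM, OUT_DIM = 8, 4

# Fixed, randomly initialized target network (never trained).
W_target = rng.normal(size=(IN_DIM, OUT_DIM))

# Predictor network, trained to match the target's output.
W_pred = np.zeros((IN_DIM, OUT_DIM))

def intrinsic_reward(state):
    """Squared prediction error vs. the frozen target: novel states are
    poorly predicted, so they earn a large exploration bonus."""
    return float(np.sum((state @ W_target - state @ W_pred) ** 2))

def update_predictor(state, lr=0.05):
    """One gradient step on 0.5 * ||error||^2, which shrinks the bonus
    for states that are visited repeatedly."""
    global W_pred
    err = state @ W_pred - state @ W_target   # shape (OUT_DIM,)
    W_pred -= lr * np.outer(state, err)       # gradient w.r.t. W_pred

state = rng.normal(size=IN_DIM)
r_before = intrinsic_reward(state)
for _ in range(50):
    update_predictor(state)
r_after = intrinsic_reward(state)
# The bonus for a repeatedly visited state decays toward zero, while
# unvisited states keep a large bonus and so keep attracting the agent.
```

In the paper's setting, this bonus would be added to the environment's sparse terminal reward at each step of the agent's walk, giving dense intermediate feedback.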
Submission Number: 52