Intrinsically Motivated Graph Exploration Using Network Theories of Human Curiosity

Published: 18 Nov 2023, Last Modified: 29 Nov 2023 · LoG 2023 Poster
Keywords: intrinsic motivations, human curiosity, reinforcement learning, graph neural networks
TL;DR: We design intrinsic motivations for graph exploration that are inspired by cognitive network science theories of human curiosity.
Abstract: Intrinsically motivated exploration has proven useful for reinforcement learning, even without additional extrinsic rewards. When the environment is naturally represented as a graph, how best to guide exploration remains an open question. In this work, we propose a novel approach for exploring graph-structured data motivated by two theories of human curiosity: the information gap theory and the compression progress theory. Both theories view curiosity as an intrinsic motivation to optimize for topological features of subgraphs induced by nodes visited in the environment. We use these proposed features as rewards for graph neural network-based reinforcement learning. On multiple classes of synthetically generated graphs, we find that trained agents generalize to longer exploratory walks and larger environments than are seen during training. Our method computes more efficiently than the greedy evaluation of the relevant topological properties. The proposed intrinsic motivations bear particular relevance for recommender systems. We demonstrate that next-node recommendations considering curiosity are more predictive of human choices than PageRank centrality in several real-world graph environments.
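The abstract describes rewarding an agent for topological features of the subgraph induced by its visited nodes. As a rough illustration of that idea (not the paper's exact formulation), the sketch below uses `networkx` and takes the number of connected components of the induced subgraph as a simple stand-in for "information gaps": the function name and the specific feature are assumptions for demonstration only.

```python
# Illustrative sketch: intrinsic reward from a topological feature of
# the subgraph induced by visited nodes. The feature used here
# (connected-component count as a crude proxy for "information gaps")
# is an assumption, not the paper's actual reward definition.
import networkx as nx

def intrinsic_reward(graph: nx.Graph, visited: list) -> float:
    """Reward walks whose induced subgraph has fewer disconnected
    pieces, i.e., fewer remaining 'gaps' (illustrative proxy)."""
    sub = graph.subgraph(visited)
    if sub.number_of_nodes() == 0:
        return 0.0
    # Fewer components -> fewer gaps -> higher (less negative) reward.
    components = nx.number_connected_components(sub)
    return -float(components - 1)

# Toy usage on a 6-node ring: visiting {0, 1, 3} leaves node 3
# disconnected from the 0-1 edge, giving two components.
G = nx.cycle_graph(6)
print(intrinsic_reward(G, [0, 1, 3]))  # -1.0 (two components)
print(intrinsic_reward(G, [0, 1, 2]))  # 0.0 (one component)
```

In the paper's setting such a reward would be recomputed at each step of the walk, so the agent is incentivized to choose next nodes that improve the subgraph's topology rather than merely popular nodes (the PageRank baseline mentioned above).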
Supplementary Materials: zip
Submission Type: Full paper proceedings track submission (max 9 main pages).
Software: https://github.com/spatank/curiosity-graphs
Submission Number: 140