Learning Lower Bounds for Graph Exploration With Reinforcement Learning

Published: 12 Dec 2020, Last Modified: 05 May 2023 · LMCA 2020 Poster
TL;DR: We use reinforcement learning to study lower bounds for the competitive ratio of the greedy explorer in online graph exploration.
Abstract: We explore the use of reinforcement learning for theoretical computer science. Reinforcement learning has been shown to find strong solutions in challenging domains such as Chess or Go. Theoretical problems, such as finding the worst possible input for an algorithm, come with even vaster, combinatorial search spaces. In this paper, we consider the example of online graph exploration, where we want to find graphs that yield a high competitive ratio for a greedy explorer. The search space consists of all graphs in which each possible edge is either present or absent. Since a graph on n nodes has quadratically many possible edges and every subset of edges is a candidate solution, this yields infeasibly large search spaces even for a small number of nodes. We show experimentally how clever constraints can keep such search spaces manageable. As a result, we can learn graphs that resemble those known from the literature and even improve them to yield higher competitive ratios.
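As a rough illustration of the size claim in the abstract (a counting sketch, not code from the paper): a labeled simple graph on n nodes has C(n, 2) possible edges, and each subset of them is a distinct graph, giving 2^C(n, 2) candidates.

```python
from math import comb

def num_labeled_graphs(n: int) -> int:
    """Number of labeled simple graphs on n nodes: each of the
    C(n, 2) possible edges is independently present or absent."""
    return 2 ** comb(n, 2)

for n in (5, 10, 15):
    print(n, num_labeled_graphs(n))
# Already at n = 10 there are 2^45 (about 3.5e13) graphs,
# far too many to enumerate exhaustively.
```

This is why the paper's constraints on the search space matter: unconstrained enumeration is hopeless well before the graph sizes of interest.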