- Abstract: Exploration is a well-known challenge in reinforcement learning. One principled way of overcoming this challenge is to find a hierarchical abstraction of the base problem and explore at these higher levels, rather than in the space of primitives. However, discovering a deep abstraction autonomously remains a largely unsolved problem, with practitioners typically hand-crafting these hierarchical control architectures. Recent work on multitask linear Markov decision processes allows for the autonomous discovery of deep hierarchical abstractions, but operates exclusively in the offline setting. Building on this work, we develop an agent that incrementally grows a hierarchical representation and uses its experience to date to improve exploration.
- Keywords: Reinforcement learning, hierarchy, linear Markov decision process, LMDP, subtask discovery, incremental
- TL;DR: We develop an agent that incrementally grows a hierarchical representation and uses its experience to date to improve exploration.