Hierarchical Subtask Discovery with Non-Negative Matrix Factorization

15 Feb 2018 (modified: 23 Feb 2018) · ICLR 2018 Conference Blind Submission
Abstract: Hierarchical reinforcement learning methods offer a powerful means of planning flexible behavior in complicated domains. However, learning an appropriate hierarchical decomposition of a domain into subtasks remains a substantial challenge. We present a novel algorithm for subtask discovery, based on the recently introduced multitask linearly-solvable Markov decision process (MLMDP) framework. The MLMDP can perform never-before-seen tasks by representing them as a linear combination of a previously learned basis set of tasks. In this setting, the subtask discovery problem can naturally be posed as finding an optimal low-rank approximation of the set of tasks the agent will face in a domain. We use non-negative matrix factorization to discover this minimal basis set of tasks, and show that the technique learns intuitive decompositions in a variety of domains. Our method has several qualitatively desirable features: it is not limited to learning subtasks with single goal states, instead learning distributed patterns of preferred states; it learns qualitatively different hierarchical decompositions in the same domain depending on the ensemble of tasks the agent will face; and it may be straightforwardly iterated to obtain deeper hierarchical decompositions.
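For intuition, the core factorization step can be sketched in a few lines. The following is a minimal illustration, assuming a matrix Z whose columns are the (non-negative) desirability functions of the tasks in the ensemble; the names Z, D, W, the random placeholder data, and the use of scikit-learn's NMF solver are illustrative assumptions, not the paper's actual implementation or experimental setup.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical task ensemble: each column of Z is the desirability
# function of one task over the domain's states (all entries >= 0).
# Random data stands in for the desirabilities an agent would learn.
rng = np.random.default_rng(0)
n_states, n_tasks, n_subtasks = 100, 20, 4
Z = rng.random((n_states, n_tasks))

# Low-rank non-negative factorization Z ~= D @ W:
# columns of D are distributed subtask patterns over states,
# columns of W are the non-negative weights that blend those
# subtasks to reconstruct each task in the ensemble.
model = NMF(n_components=n_subtasks, init="nndsvda", max_iter=500)
D = model.fit_transform(Z)   # shape (n_states, n_subtasks)
W = model.components_        # shape (n_subtasks, n_tasks)

# Frobenius-norm error of the rank-k approximation.
print(f"rank-{n_subtasks} reconstruction error:",
      np.linalg.norm(Z - D @ W))
```

Consistent with the abstract's claims, each column of D in such a sketch is a distributed pattern of preferred states rather than a single goal state, and the same factorization could in principle be applied again to the discovered basis to obtain deeper hierarchical decompositions.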
TL;DR: We present a novel algorithm for hierarchical subtask discovery which leverages the multitask linearly-solvable Markov decision process (MLMDP) framework.
Keywords: Reinforcement Learning, Hierarchy, Subtask Discovery, Linear Markov Decision Process