Keywords: Multi-level Markov decision processes, hierarchical reinforcement learning, transfer learning, curriculum learning, meta-reinforcement learning, skill, higher-order function, divide-and-conquer, dynamic programming, sparse reward.
TL;DR: We propose a framework for multi-difficulty, skill-based curricula built on multi-level Markov decision processes (MDPs), enabling efficient decomposition and solution of MDPs, as well as skill transfer across MDPs and across levels, both within and between curricula.
Abstract: We consider problems in sequential decision making with natural multi-level structure, where sub-tasks are assembled to accomplish complex goals. Systematically inferring and leveraging hierarchical structure has remained a longstanding challenge; we describe an efficient multi-level procedure for repeatedly compressing Markov decision processes (MDPs), wherein a parametric family of policies at one level is treated as an action in the compressed MDPs at higher levels, preserving the semantics and structure of the original MDP and mirroring the natural logic of addressing a complex MDP. Higher-level MDPs are themselves independent, deterministic MDPs and may be solved using existing algorithms. As a byproduct, spatial or temporal scales may be coarsened at higher levels, making it more efficient to find long-term optimal policies.
The multi-level representation delivered by this procedure decouples sub-tasks from one another and typically reduces unnecessary stochasticity and shrinks the policy search space, so the resulting MDPs can be solved with fewer iterations and less computation.
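To make the compression step concrete, the following is a minimal Python sketch, not the paper's implementation: a parametric family of low-level policies becomes the action set of a compressed higher-level MDP. LowLevelPolicy, CompressedMDP, and the env.step(state, action) -> (state, done) interface are all hypothetical names introduced for illustration.

    # Hypothetical sketch: low-level policies become higher-level actions.

    class LowLevelPolicy:
        """A sub-task policy, parameterized by theta (e.g., 'reach sub-goal theta')."""
        def __init__(self, theta):
            self.theta = theta

        def act(self, state):
            # Placeholder low-level control toward the sub-goal.
            return ("move_toward", self.theta)

    class CompressedMDP:
        """Higher-level MDP whose actions are entire low-level policies.
        One higher-level step is one full low-level rollout, so transitions
        at this level are (near-)deterministic and standard solvers apply."""
        def __init__(self, env, thetas, max_steps=50):
            self.env = env  # assumed interface: env.step(state, action) -> (state, done)
            self.actions = [LowLevelPolicy(t) for t in thetas]
            self.max_steps = max_steps

        def step(self, state, policy):
            # Roll the chosen low-level policy until its sub-task terminates;
            # the resulting state is the next higher-level state.
            for _ in range(self.max_steps):
                state, done = self.env.step(state, policy.act(state))
                if done:
                    break
            return state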
A second fundamental aspect of this work is that these multi-level decompositions, combined with the factorization of policies into problem-specific embeddings and transferable skills (including higher-order functions), create new opportunities to transfer skills across different problems and different levels.
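One way to picture this factorization is the following hypothetical sketch, which composes a problem-specific embedding with a reusable skill realized as a higher-order function; make_policy, navigate, and the toy embeddings are illustrative assumptions, not the paper's code.

    # Hypothetical sketch: policy = skill composed with a problem-specific embedding.

    def make_policy(embed, skill):
        """Compose an embedding with a skill; swapping the embedding
        reuses the same skill in a different MDP."""
        def policy(state):
            return skill(embed(state))
        return policy

    def navigate(features):
        # Skill: move toward the goal along x, independent of which maze.
        return "left" if features["goal_dx"] < 0 else "right"

    # The same 'navigate' skill reused in two mazes via two embeddings.
    maze_a_policy = make_policy(lambda s: {"goal_dx": s[0] - 3}, navigate)
    maze_b_policy = make_policy(lambda s: {"goal_dx": s[0] - 7}, navigate)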
The whole process is framed as curriculum learning: a teacher organizes the student agent's learning so that task difficulty increases gradually and transfer opportunities abound across different MDPs and different levels, within and across curricula.
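A teacher-organized curriculum of this kind could be driven by a loop like the sketch below, where run_curriculum, solve, and the task batches are placeholder assumptions about structure rather than the paper's algorithm.

    # Hypothetical sketch: task batches are presented easiest-first, and
    # skills learned at each stage seed the next.

    def run_curriculum(tasks_by_difficulty, solve):
        """tasks_by_difficulty: list of task batches, ordered easiest-first.
        solve(task, skills) -> dict of new skills learned on that task."""
        skill_library = {}
        for tasks in tasks_by_difficulty:
            for task in tasks:
                # Transfer: the student starts from the accumulated skill
                # library instead of learning each task from scratch.
                skill_library.update(solve(task, skill_library))
        return skill_library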
The consistency of this general framework, and the benefits brought by its multi-level structures and abundant transfer-learning opportunities, can be justified under mild assumptions. Mathematically, beyond MDP homogenization, the framework connects to multi-index models, tensor-product structure in action sets, and function composition, with potential applications such as multi-level proof tactics for automated theorem proving.
The methodology is broadly applicable: it extends directly to continuous settings and to environments that must be explored, combines with existing reinforcement learning algorithms and with natural language, and spans a range of application domains.
We demonstrate abstraction, transferability, and curriculum learning in several illustrative domains, including a more complex version of the MazeBase environment.
Primary Area: reinforcement learning
Submission Number: 19506