Keywords: Motion planning, robotics, learning abstractions, bilevel planning, hierarchical planning
TL;DR: This paper presents a framework that uses deep learning to learn hierarchical state and action abstractions, and introduces a novel multi-source, multi-directional hierarchical planning algorithm that efficiently uses the learned abstractions.
Abstract: State and action hierarchies have been found to be invaluable in long-horizon robot motion planning. However, approaches for learning such hierarchies tend to require extensive experience on the target task or in the target environment, and/or assume deterministic dynamics. This paper considers the problem of learning how to create state and action abstractions for a known robot with stochastic low-level controllers in previously unseen environments. We present a novel and robust approach for learning to create an abstract, searchable state space, high-level options, and low-level option policies in this setting. We show that this approach facilitates efficient hierarchical planning in stochastic settings with strong guarantees of composability and completeness for holonomic robots. Extensive empirical analysis with holonomic as well as non-holonomic robots on a total of $60$ different combinations of unseen environments and tasks shows that the resulting approach is broadly applicable, scales well, and enables effective learning and transfer even in tasks with long horizons where baselines are unable to learn.
Submission Number: 24