Keywords: Episodic Policy Search, Versatile Skill Learning, Hierarchical RL, Curriculum Learning
Abstract: A long-cherished vision in robotics is to equip robots with skills that match the versatility and precision of humans.
For example, when playing table tennis, a robot should be capable of returning the ball in various ways while precisely placing it at the desired location.
A common approach to model such versatile behavior is to use a Mixture of Experts (MoE) model, where each expert is a contextual motion primitive.
However, learning such MoEs is challenging, as most objectives force the model to cover the entire context space, which prevents specialization of the primitives and results in rather low-quality components.
Starting from maximum entropy reinforcement learning (RL), we decompose the objective into optimizing an individual lower bound per mixture component.
Further, we introduce a curriculum by allowing the components to focus on a local context region, enabling the model to learn highly accurate skill representations.
To this end, we use local context distributions that are adapted jointly with the expert primitives. Our lower bound further advocates adding components iteratively, where each new component concentrates on a local context region not yet covered by the current MoE.
This local and incremental learning results in a modular MoE model of high accuracy and versatility, where both properties can be scaled by adding more components on the fly.
We demonstrate this with an extensive ablation study and on two challenging simulated robot skill learning tasks. We compare our performance to LaDiPS and to HiREPS, a known hierarchical policy search method for learning diverse skills.
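To make the incremental, locally specialized MoE idea above concrete, below is a minimal toy sketch in numpy. It is an illustration only, not the authors' code: the reward function, the `Component` class, the reward-weighted update, and the placement of new components are assumptions that stand in for the paper's per-component lower-bound optimization.

```python
import numpy as np

rng = np.random.default_rng(0)


# Toy episodic task: context c in [0, 1], a single policy parameter theta.
# Reward is highest when theta matches a context-dependent target.
def reward(c, theta):
    return -(theta - np.sin(4.0 * np.pi * c)) ** 2


class Component:
    """One expert primitive with its own local Gaussian context distribution."""

    def __init__(self, ctx_mean):
        self.ctx_mean, self.ctx_std = ctx_mean, 0.15  # local context region
        self.th_mean, self.th_std = 0.0, 1.0          # expert over the parameter

    def update(self, n=200, beta=5.0):
        # Sample contexts only from this component's local region (the curriculum),
        # then apply a simple reward-weighted update to the expert and to the
        # context distribution; this is a stand-in for optimizing the paper's
        # per-component lower bound, not the actual objective.
        c = rng.normal(self.ctx_mean, self.ctx_std, n)
        th = rng.normal(self.th_mean, self.th_std, n)
        r = reward(c, th)
        w = np.exp(beta * (r - r.max()))
        w /= w.sum()
        self.th_mean = float(np.sum(w * th))
        self.th_std = max(float(np.sqrt(np.sum(w * (th - self.th_mean) ** 2))), 1e-2)
        self.ctx_mean = float(np.sum(w * c))


components = []
for k in range(4):
    # Add components one at a time; here new components are simply spread over
    # [0, 1], whereas the paper places them in context regions the current
    # mixture covers poorly.
    components.append(Component(ctx_mean=(k + 0.5) / 4))
    for _ in range(50):
        for comp in components:
            comp.update()

for i, comp in enumerate(components):
    print(f"component {i}: context ~ N({comp.ctx_mean:.2f}, {comp.ctx_std}^2), "
          f"expert mean {comp.th_mean:.2f}")
```

Each component ends up responsible for a narrow slice of the context space where a single expert can be very accurate, and adding more components extends coverage without disturbing the already specialized ones.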
Supplementary Material: zip
Poster: png
Community Implementations: 2 code implementations (https://www.catalyzex.com/paper/arxiv:2112.04216/code)