Keywords: Imitation Learning, Versatile Skill Learning, Curriculum Learning
TL;DR: The paper proposes a novel algorithm for training mixture-of-experts models for versatile skill learning.
Abstract: Imitation learning uses demonstration data to train policies for complex tasks. However,
when the training data is collected from human demonstrators, it often exhibits
multimodal distributions because of the variability in human actions. Most
imitation learning methods rely on a maximum likelihood (ML) objective to learn
a parameterized policy, which can result in suboptimal or unsafe behavior due
to the mode-averaging property of the ML objective. In this work, we propose
Information Maximizing Curriculum, a curriculum-based approach that assigns
a weight to each data point and encourages the model to specialize in the data it
can represent, effectively mitigating the mode-averaging problem by allowing the
model to ignore data from modes it cannot represent. To cover all modes, and thus
enable versatile behavior, we extend our approach to a mixture-of-experts (MoE)
policy, where each mixture component selects its own subset of the training data
for learning. A novel maximum-entropy-based objective is proposed to achieve
full coverage of the dataset, thereby enabling the policy to encompass all modes
within the data distribution. We demonstrate the effectiveness of our approach on
complex simulated control tasks using versatile human demonstrations, achieving
superior performance compared to state-of-the-art methods.
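The curriculum idea in the abstract, per-datapoint weights that let each mixture component specialize while an entropy term keeps the whole dataset covered, can be illustrated with a minimal NumPy sketch. All names here (`log_lik`, `eta`, the softmax-style weight update) are illustrative assumptions for exposition, not the paper's exact objective or update rule.

```python
import numpy as np

# Hypothetical sketch of entropy-regularized curriculum weights for a
# mixture-of-experts policy. The closed-form softmax update below is an
# assumption for illustration, not the paper's algorithm.

rng = np.random.default_rng(0)
N, K = 6, 2                        # number of data points, number of experts
log_lik = rng.normal(size=(N, K))  # each expert's log-likelihood of each point
eta = 1.0                          # entropy-regularization temperature

# Each expert holds a curriculum: a distribution over the data points.
# Maximizing expected log-likelihood plus an entropy bonus on the weights
# yields a softmax of the log-likelihoods over the dataset, so every expert
# concentrates on the data it already represents well (mode specialization)
# while the entropy term keeps weights spread over the dataset (coverage).
w = np.exp(log_lik / eta)
w /= w.sum(axis=0, keepdims=True)  # one normalized curriculum per expert

# Each column is a valid probability distribution over the N data points.
print(w.sum(axis=0))
```

A lower temperature `eta` sharpens the curricula (stronger specialization per expert); a higher `eta` flattens them toward uniform weights (broader coverage), mirroring the specialization/coverage trade-off described in the abstract.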
Supplementary Material: zip
Submission Number: 13886