Improving performance on the ManiSkill Challenge via Super-convergence and Multi-Task Learning

Published: 27 Apr 2022, Last Modified: 05 May 2023, ICLR 2022 GPL Poster
Keywords: Behavior Cloning, Super-convergence, Multi-task Learning
TL;DR: We show how learning rate scheduling improved model training and, together with multi-task learning, helped us improve performance on the ManiSkill Challenge.
Abstract: We present key aspects of our approach to the ManiSkill Challenge, where we used Imitation Learning on the provided demonstration dataset to let a robot learn how to manipulate interactive objects. We present what is, to our knowledge, the first application of super-convergence via learning rate scheduling to Imitation Learning and robotics, enabling better policy performance while reducing training time by almost an order of magnitude. We also present how we used Multi-task Learning to reach a top score on unseen objects in one task of the challenge, showing that the strategy can unlock generalization performance on some tasks and corroborating other work in the field. Finally, we show that simple data augmentation strategies can push model performance further.
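The super-convergence effect mentioned in the abstract is typically obtained with a one-cycle learning rate schedule (warm-up to a high peak learning rate, then decay to a much smaller value). The sketch below is a minimal, self-contained illustration of such a schedule; the parameter values (`max_lr`, warm-up fraction, divisor factors) are hypothetical defaults, not the ones used in the paper.

```python
def one_cycle_lr(step, total_steps, max_lr=1e-2, div_factor=25.0,
                 final_div_factor=1e4, pct_warmup=0.3):
    """One-cycle ("super-convergence") schedule sketch.

    Linear warm-up from max_lr / div_factor up to max_lr, then linear
    decay down to a much smaller final learning rate. All parameter
    values here are illustrative assumptions.
    """
    warmup_steps = int(total_steps * pct_warmup)
    initial_lr = max_lr / div_factor
    final_lr = initial_lr / final_div_factor

    if step < warmup_steps:
        # Warm-up phase: ramp linearly toward the peak learning rate.
        t = step / max(1, warmup_steps)
        return initial_lr + t * (max_lr - initial_lr)

    # Decay phase: ramp linearly down to the tiny final learning rate.
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return max_lr + t * (final_lr - max_lr)

# Per-step learning rates for a hypothetical 100-step training run.
schedule = [one_cycle_lr(s, total_steps=100) for s in range(100)]
```

In practice a library implementation such as PyTorch's `torch.optim.lr_scheduler.OneCycleLR` would be used instead of hand-rolling the schedule; the point of the sketch is only the shape of the cycle.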