Visuomotor Control in Multi-Object Scenes Using Object-Aware Representations

Published: 17 Nov 2022, Last Modified: 05 May 2023 · PRL 2022 Poster
Keywords: Robot Learning, Robot Vision
Abstract: Perceptual understanding of a scene and of the relationships between its components is important for the successful completion of robotic tasks. Representation learning has been shown to be a powerful technique for this, but most current methods learn task-specific representations that do not necessarily transfer well to other tasks. Furthermore, representations learned by supervised methods require large labeled datasets, which are expensive to collect in the real world. Using self-supervised learning to obtain representations from unlabeled data can mitigate this problem. In this paper, we show the effectiveness of self-supervised object-aware representation learning techniques for robotic tasks. Our representations are learned by observing the agent freely interacting with different parts of the environment and are queried in two different settings: (i) policy learning and (ii) object location prediction. We show that our model learns control policies in a sample-efficient manner and outperforms state-of-the-art object-agnostic techniques as well as methods trained on raw RGB images. Our results show a 20% increase in performance in the low-data regime (1000 trajectories) when training policies with implicit behavioral cloning (IBC). Furthermore, our method outperforms the baselines on the task of multi-object localization.
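As a rough illustration of the policy-learning setting described in the abstract, the sketch below plugs a frozen, pretrained object-aware encoder into a behavioral-cloning head. It is a minimal sketch, not the paper's implementation: the `ObjectAwareEncoder` class, the slot/feature dimensions, and the action dimension are all hypothetical placeholders, and for brevity it uses an explicit MSE cloning loss rather than the energy-based implicit behavioral cloning (IBC) objective the paper actually evaluates.

```python
import torch
import torch.nn as nn

class ObjectAwareEncoder(nn.Module):
    """Hypothetical stand-in for a frozen, pretrained object-aware encoder
    that maps an RGB observation to K per-object feature vectors."""
    def __init__(self, num_slots: int = 4, feat_dim: int = 64):
        super().__init__()
        self.num_slots, self.feat_dim = num_slots, feat_dim
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_slots * feat_dim),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W) -> (B, K, D) per-object features
        return self.backbone(rgb).view(-1, self.num_slots, self.feat_dim)

class BCPolicy(nn.Module):
    """Behavioral-cloning head on top of the frozen representation."""
    def __init__(self, encoder: ObjectAwareEncoder, action_dim: int = 7):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():   # keep the representation fixed
            p.requires_grad_(False)
        self.head = nn.Sequential(
            nn.Linear(encoder.num_slots * encoder.feat_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(rgb).flatten(1)  # (B, K*D)
        return self.head(feats)

# One gradient step on a dummy (observation, expert action) batch.
policy = BCPolicy(ObjectAwareEncoder())
opt = torch.optim.Adam(
    [p for p in policy.parameters() if p.requires_grad], lr=1e-4)
obs, expert_act = torch.randn(8, 3, 64, 64), torch.randn(8, 7)
loss = nn.functional.mse_loss(policy(obs), expert_act)
opt.zero_grad(); loss.backward(); opt.step()
```

Freezing the encoder reflects the abstract's premise: the object-aware representation is learned once from unlabeled interaction data and then queried by downstream heads (here a policy; a localization head would be analogous), rather than retrained per task.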