Unsupervised Feature Learning for Manipulation with Contrastive Domain Randomization

Anonymous

Published: 15 Jun 2022, Last Modified: 22 Oct 2023, SSL-RL 2021 Poster
Keywords: representation learning, robotic learning, robotic manipulation, unsupervised learning
TL;DR: A new method, Contrastive Domain Randomization (CDR), for learning domain-invariant unsupervised image representations for robotic manipulation
Abstract: Robotic tasks such as manipulation with visual inputs require image features that capture the physical properties of the scene, e.g., the position and configuration of objects. Recently, it has been suggested to learn such features in an unsupervised manner from simulated, self-supervised, robot interaction. The idea is that high-level physical properties are well captured by modern physics simulators, and a representation of them learned from visual inputs may transfer well to the real world. In particular, learning methods based on noise contrastive estimation have shown promising results. To make the simulation-to-real transfer more robust, domain randomization (DR) was suggested for learning features that are invariant to irrelevant visual properties such as textures or lighting. In this work, however, we show that a naive application of DR to unsupervised learning based on contrastive estimation *does not* promote invariance, as the loss function maximizes mutual information between the features and both the relevant and irrelevant visual properties. We propose a simple modification of the contrastive loss to fix this, exploiting the fact that we can control the simulated randomization of visual properties. Our approach learns physical features that are significantly more robust to visual domain variation, as we demonstrate using both rigid and non-rigid objects.
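To make the loss modification described in the abstract concrete, here is a minimal PyTorch sketch of one plausible reading: an InfoNCE-style loss where the positive pair consists of the *same* simulated state rendered under two independently sampled domain randomizations, so that the randomized (irrelevant) visual properties carry no discriminative signal. This is our illustration, not the paper's implementation; the function name `cdr_info_nce`, the `temperature` value, and the batch layout are assumptions.

```python
# Minimal sketch (not the paper's code) of an InfoNCE-style loss in which the
# positive pair is the SAME simulated state rendered under two independently
# sampled domain randomizations. All names here are illustrative assumptions.

import torch
import torch.nn.functional as F

def cdr_info_nce(z_a: torch.Tensor, z_b: torch.Tensor,
                 temperature: float = 0.1) -> torch.Tensor:
    """z_a[i] and z_b[i] embed two differently randomized renderings of
    simulator state i; rows z_b[j] with j != i act as negatives."""
    z_a = F.normalize(z_a, dim=1)                 # unit-norm embeddings
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (N, N) cosine similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, labels)        # match state i across views

if __name__ == "__main__":
    # Stand-ins for encoder outputs on two randomizations of a batch of states.
    z_a, z_b = torch.randn(32, 128), torch.randn(32, 128)
    print(cdr_info_nce(z_a, z_b).item())
```

Because each positive pair shares the underlying physical state but not the randomized textures or lighting, the irrelevant visual properties cannot help discriminate positives from negatives; under a naive DR setup, where both views of a positive pair share one randomization, they can, which is the failure mode the abstract describes.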
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2103.11144/code) (via CatalyzeX)