The (Un)Surprising Effectiveness of Pre-Trained Vision Models for Control

Published: 23 Jun 2022 · Last Modified: 05 May 2023 · L-DOD 2022 Poster · Readers: Everyone
Keywords: Representation Learning, Offline Data, Vision Models
TL;DR: We find that policies trained using frozen pre-trained visual representations can match or exceed the performance of policies trained using ground-truth states for a variety of control tasks.
Abstract: Recent years have seen the emergence of pre-trained representations as a powerful abstraction for AI applications in computer vision, natural language, and speech. However, policy learning for control is still dominated by a tabula-rasa learning paradigm, with visuo-motor policies often trained from scratch using data from deployment environments. In this context, we revisit the role of pre-trained visual representations (PVRs) for control, and in particular representations trained using large-scale computer vision datasets. Through extensive empirical evaluation in diverse control domains, we isolate and study the importance of different representation training methods, data augmentations, and feature hierarchies. Overall, we find that frozen pre-trained visual representations can be competitive or even better than ground-truth state representations to train control policies with imitation learning (behavior cloning). This is in spite of pre-training representations entirely on out-of-domain data from standard computer vision datasets, without even a single frame from the deployment domains.