Pretrained Encoders are All You Need

Published: 22 Jul 2021, Last Modified: 14 Jul 2024
Venue: URL 2021 Poster
Keywords: Unsupervised Representation Learning, Atari, Reinforcement Learning, Pretrained Models
Abstract: Data efficiency and generalization are key challenges in deep learning and deep reinforcement learning, as many models are trained on large-scale, domain-specific, and expensive-to-label datasets. Self-supervised models trained on large-scale uncurated datasets have shown successful transfer to diverse settings. We investigate using pretrained image representations and spatio-temporal attention for state representation learning in Atari. We also explore fine-tuning pretrained representations with self-supervised techniques, namely contrastive predictive coding, spatio-temporal contrastive learning, and data augmentations. Our results show that pretrained representations are on par with state-of-the-art self-supervised methods trained on domain-specific data. Pretrained representations thus yield data- and compute-efficient state representations.
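The core idea, using a frozen pretrained image encoder to produce state representations for Atari frames, can be illustrated with a minimal sketch. This is not the authors' exact pipeline: the encoder choice (an ImageNet-pretrained ResNet-18 from torchvision), the preprocessing, and the `encode_frames` helper are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's exact method): extract
# state representations for Atari frames with a frozen pretrained encoder.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Load an ImageNet-pretrained encoder and drop its classification head,
# keeping the convolutional trunk as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder = nn.Sequential(*list(backbone.children())[:-1])  # globally pooled features
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False  # keep the pretrained representation frozen

# Preprocess raw Atari RGB frames (H, W, 3 uint8) into the encoder's expected input.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def encode_frames(frames):
    """frames: list of uint8 RGB arrays -> (N, 512) state representations."""
    batch = torch.stack([preprocess(f) for f in frames])
    return encoder(batch).flatten(1)
```

These frozen features could then be fed to a downstream policy or probing head; the paper additionally studies fine-tuning such representations with self-supervised objectives such as contrastive predictive coding and spatio-temporal contrastive learning, which are not shown here.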
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/pretrained-encoders-are-all-you-need/code)