Viewmaker Networks: Learning Views for Unsupervised Representation Learning

Published: 12 Jan 2021 (ICLR 2021 Poster), Last Modified: 22 Oct 2023
Keywords: unsupervised learning, self-supervised, representation learning, contrastive learning, views, data augmentation
Abstract: Many recent methods for unsupervised representation learning train models to be invariant to different "views," or distorted versions of an input. However, designing these views requires considerable trial and error by human experts, hindering widespread adoption of unsupervised representation learning methods across domains and modalities. To address this, we propose viewmaker networks: generative models that learn to produce useful views from a given input. Viewmakers are stochastic bounded adversaries: they produce views by generating and then adding an $\ell_p$-bounded perturbation to the input, and are trained adversarially with respect to the main encoder network. Remarkably, when pretraining on CIFAR-10, our learned views enable comparable transfer accuracy to the well-tuned SimCLR augmentations---despite not including transformations like cropping or color jitter. Furthermore, our learned views significantly outperform baseline augmentations on speech recordings (+9 points on average) and wearable sensor data (+17 points on average). Viewmaker views can also be combined with handcrafted views: they improve robustness to common image corruptions and can increase transfer performance in cases where handcrafted views are less explored. These results suggest that viewmakers may provide a path towards more general representation learning algorithms---reducing the domain expertise and effort needed to pretrain on a much wider set of domains. Code is available at https://github.com/alextamkin/viewmaker.
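As a rough illustration of the mechanism the abstract describes, the sketch below implements a stochastic bounded adversary in PyTorch: a small network generates a perturbation from the input plus a random noise channel, the perturbation is kept within an $\ell_1$ budget by simple rescaling, and the viewmaker takes a gradient step to maximize the same contrastive loss the encoder minimizes. The `TinyViewmaker` and encoder architectures, the `info_nce` loss, the rescaling step, and all hyperparameters here are illustrative assumptions, not the paper's implementation; see the linked repository for the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyViewmaker(nn.Module):
    """Illustrative stand-in for the viewmaker network (not the paper's)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x, budget=0.05):
        noise = torch.rand_like(x[:, :1])              # random channel -> stochastic views
        delta = torch.tanh(self.net(torch.cat([x, noise], dim=1)))
        # Keep the perturbation inside an l_1 budget by rescaling
        # (a simplification; the exact projection is an assumption here).
        norms = delta.abs().flatten(1).sum(dim=1).view(-1, 1, 1, 1)
        allowed = budget * delta[0].numel()
        delta = delta * (allowed / norms.clamp(min=1e-6)).clamp(max=1.0)
        return (x + delta).clamp(0.0, 1.0)

def info_nce(z1, z2, temp=0.1):
    """Symmetric InfoNCE loss over a batch of positive pairs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temp
    labels = torch.arange(z1.size(0))
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Adversarial step: the encoder descends on the contrastive loss between
# two viewmaker views of the same batch; the viewmaker ascends on it.
encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 32))
viewmaker = TinyViewmaker()
enc_opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
vm_opt = torch.optim.Adam(viewmaker.parameters(), lr=1e-3)

x = torch.rand(8, 3, 32, 32)                           # toy batch of "images"
loss = info_nce(encoder(viewmaker(x)), encoder(viewmaker(x)))
enc_opt.zero_grad(); vm_opt.zero_grad()
loss.backward()
enc_opt.step()                                          # encoder: minimize
for p_ in viewmaker.parameters():                       # viewmaker: maximize
    p_.grad = -p_.grad
vm_opt.step()
```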
One-sentence Summary: We present a new generative model that produces views for self-supervised learning, matching or outperforming hand-crafted views on image, speech, and wearable sensor datasets
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Code: [alextamkin/viewmaker](https://github.com/alextamkin/viewmaker)
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [COCO](https://paperswithcode.com/dataset/coco), [Fashion-MNIST](https://paperswithcode.com/dataset/fashion-mnist), [LibriSpeech](https://paperswithcode.com/dataset/librispeech), [MNIST](https://paperswithcode.com/dataset/mnist), [PAMAP2](https://paperswithcode.com/dataset/pamap2), [Speech Commands](https://paperswithcode.com/dataset/speech-commands), [VoxCeleb1](https://paperswithcode.com/dataset/voxceleb1)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2010.07432/code)