Viewmaker Networks: Learning Views for Unsupervised Representation Learning

28 Sep 2020 (modified: 25 Jan 2021) · ICLR 2021 Poster · Readers: Everyone
  • Keywords: unsupervised learning, representation learning, contrastive learning, views, data augmentation
  • Abstract: Many recent methods for unsupervised representation learning involve training models to be invariant to different "views," or augmented versions of an input. However, designing these views requires considerable human expertise and experimentation, hindering widespread adoption of unsupervised representation learning methods across domains and modalities. To address this, we propose viewmaker networks: generative models which learn to produce input-dependent views for contrastive learning. We train these networks jointly with the main network to produce adversarial $\ell_p$ perturbations for an input, which yields challenging yet faithful views without extensive human tuning. Our learned views enable comparable transfer accuracy to the well-studied SimCLR augmentations on CIFAR-10, while significantly outperforming baseline augmentations in speech (+9% absolute) and IMU sensor (+17% absolute) domains. We also show how viewmaker views can be combined with SimCLR views to improve robustness to common image corruptions. Our method provides a roadmap for reducing the amount of expertise and effort needed for unsupervised learning, potentially extending its benefits to a much wider set of domains.
  • One-sentence Summary: We present a new generative model that produces views for contrastive learning, matching or outperforming hand-crafted views on image, speech, and wearable sensor datasets
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • Supplementary Material: zip
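
To make the mechanism described in the abstract concrete, below is a minimal sketch of viewmaker-style training in PyTorch. This is a hedged illustration, not the authors' released code: the names (`Viewmaker`, `info_nce`), the tiny architectures, the budget-scaling stand-in for the paper's $\ell_p$ projection, and the alternating adversarial updates are all simplifying assumptions made for readability.

```python
# Sketch of viewmaker-style contrastive training (assumptions throughout:
# architectures, loss, and projection are simplified stand-ins).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Viewmaker(nn.Module):
    """Generates a stochastic additive perturbation with bounded strength."""
    def __init__(self, budget=0.05):
        super().__init__()
        self.budget = budget  # distortion budget (illustrative value)
        # 4 input channels: RGB image plus one injected noise channel.
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Random noise channel makes each call produce a different view.
        noise = torch.rand_like(x[:, :1])
        delta = self.net(torch.cat([x, noise], dim=1))
        # Rescale so the mean absolute perturbation stays within the budget
        # (a simple stand-in for the paper's L_p-norm projection).
        norm = delta.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-8
        delta = self.budget * delta / norm
        return (x + delta).clamp(0, 1)

def info_nce(z1, z2, temperature=0.1):
    """SimCLR-style contrastive loss between two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0))  # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy encoder and data standing in for the real network and dataset.
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                        nn.Linear(16, 64))
viewmaker = Viewmaker()
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_vm = torch.optim.Adam(viewmaker.parameters(), lr=1e-3)
x = torch.rand(8, 3, 32, 32)

# The encoder minimizes the contrastive loss on two learned views...
loss = info_nce(encoder(viewmaker(x)), encoder(viewmaker(x)))
opt_enc.zero_grad(); loss.backward(); opt_enc.step()

# ...while the viewmaker is trained adversarially to maximize it,
# producing views that are challenging yet bounded in strength.
vm_loss = -info_nce(encoder(viewmaker(x)), encoder(viewmaker(x)))
opt_vm.zero_grad(); vm_loss.backward(); opt_vm.step()
```

The key design point the sketch illustrates is the adversarial coupling: because the perturbation is norm-bounded, the viewmaker can make views harder for the encoder but cannot destroy the input's identity, which is what yields "challenging yet faithful" views without hand-tuned augmentations.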