Learning Useful Representations for Shifting Tasks and Distributions

22 Sept 2022 (modified: 12 Mar 2024) · ICLR 2023 Conference Withdrawn Submission
Keywords: rich representation learning, supervised transfer learning, self-supervised transfer learning, few shot learning, out-of-distribution robust learning
Abstract: Representation learning in deep models usually happens as a side effect of minimizing the expected risk using back-propagation. However, one of the challenges of modern deep learning is the increasingly recognized need to deal with multiple tasks and varying data distributions, as illustrated, for instance, by the value of transfer learning and the risks of shortcut learning. Are the representations learned by back-propagation up to the task? This work presents and empirically evaluates two methods that combine the feature extractors learned during multiple training episodes, constructing a representation richer than those usually obtained through a single expected risk minimization episode. Comprehensive experiments in supervised transfer learning, self-supervised transfer learning, few-shot learning, and out-of-distribution robust learning scenarios show that such rich representations can match, and often exceed, the performance of those obtained by training an equivalently sized network, usually at a far lower computational cost.
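The abstract only sketches how the feature extractors are combined. Below is a minimal illustration, assuming the simplest such combination: freeze the feature extractor from each training episode, concatenate their outputs into one wider representation, and fit a lightweight task head on top. All module names and dimensions here are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConcatenatedFeatures(nn.Module):
    """Combine feature extractors from several training episodes by
    concatenating their frozen outputs into one richer representation."""

    def __init__(self, extractors):
        super().__init__()
        self.extractors = nn.ModuleList(extractors)
        # Freeze every episode's features; only a head is trained later.
        for p in self.extractors.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        # Each extractor maps x -> (batch, d_i); concatenate along features.
        return torch.cat([f(x) for f in self.extractors], dim=-1)

# Hypothetical usage: three small backbones standing in for networks
# trained in separate episodes (e.g., different seeds or data orders).
backbones = [
    nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
    for _ in range(3)
]
features = ConcatenatedFeatures(backbones)
probe = nn.Linear(3 * 128, 10)  # linear probe trained on the target task

x = torch.randn(8, 1, 28, 28)
logits = probe(features(x))
print(logits.shape)  # torch.Size([8, 10])
```

Because the concatenated features stay frozen, adapting to a new task or distribution only requires training the small probe, which is consistent with the abstract's claim of a far lower computational cost than training an equivalently sized network from scratch.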
Area: Deep Learning and representational learning
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/arxiv:2212.07346/code)