Performance guarantees for transferring representations

28 Mar 2024 (modified: 14 Mar 2017) · ICLR 2017 workshop submission
Abstract: A popular machine learning strategy is the transfer of a representation (i.e., a feature extraction function) learned on a source task to a target task. Examples include the reuse of neural network weights or word embeddings. Our work proposes novel and general sufficient conditions for the success of this approach. If the representation learned from the source task is fixed, we identify conditions on how the tasks relate to obtain an upper bound on target-task risk via a VC-dimension-based argument. We then consider using the representation from the source task to construct a prior, which is fine-tuned using target-task data. We give a PAC-Bayes target-task risk bound in this setting under suitable conditions. We show examples of our bounds using feedforward neural networks. Our results motivate a practical approach to weight sharing, which we validate with experiments.
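
For context, the standard McAllester-style PAC-Bayes template that bounds of this kind instantiate is sketched below. The paper derives its own variant under its stated conditions, so this generic form is only meant to show where a source-derived prior enters the bound; it is not the paper's exact statement.

```latex
% Generic McAllester-style PAC-Bayes bound (standard form, not the
% paper's exact statement). P is a prior over hypotheses -- here the
% role played by the source-task representation -- and Q is the
% posterior obtained by fine-tuning on n target-task samples.
\Pr_{S \sim \mathcal{D}^n}\left[\, \forall Q:\;
  \mathbb{E}_{h \sim Q}\big[R(h)\big]
  \le \mathbb{E}_{h \sim Q}\big[\widehat{R}_S(h)\big]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln \frac{2\sqrt{n}}{\delta}}{2n}}
\,\right] \ge 1 - \delta
```

The KL term quantifies how far fine-tuning moves the weights away from the transferred prior, which is why a source representation that is already close to a good target solution yields a tighter guarantee.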
TL;DR: We develop sufficient conditions for successfully transferring representations between tasks and present an application to weight sharing in neural networks.
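
As a concrete, purely illustrative sketch of the weight-sharing recipe the TL;DR refers to, the PyTorch snippet below sets up both settings from the abstract: a frozen transferred representation with a freshly trained head, and full fine-tuning initialized at the source weights. All architecture sizes, hyperparameters, and data are made-up placeholders, not the paper's experimental setup.

```python
# Illustrative sketch only (not the paper's code): the two transfer modes
# described in the abstract, for a small feedforward network in PyTorch.
import torch
import torch.nn as nn

def make_feature_extractor(in_dim=20, hidden=64, feat_dim=32):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, feat_dim), nn.ReLU(),
    )

# --- Source task: learn the representation (training loop omitted) ---
source_features = make_feature_extractor()

# Mode 1: fixed representation. Freeze the transferred feature extractor
# and train only a new target-task head, so the capacity-based (VC-style)
# argument applies to the head's hypothesis class alone.
frozen = make_feature_extractor()
frozen.load_state_dict(source_features.state_dict())
for p in frozen.parameters():
    p.requires_grad_(False)
target_head = nn.Linear(32, 2)  # only these weights are trained
mode1 = nn.Sequential(frozen, target_head)
opt1 = torch.optim.SGD(target_head.parameters(), lr=1e-2)

# Mode 2: source representation as a prior. Copy the source weights and
# fine-tune everything on target data; in the PAC-Bayes reading, the
# copied weights act as the prior and the fine-tuned network as the
# posterior whose KL divergence from that prior the bound charges for.
finetuned = make_feature_extractor()
finetuned.load_state_dict(source_features.state_dict())
mode2 = nn.Sequential(finetuned, nn.Linear(32, 2))
opt2 = torch.optim.SGD(mode2.parameters(), lr=1e-3)

# One illustrative target-task step on random stand-in data.
x, y = torch.randn(8, 20), torch.randint(0, 2, (8,))
loss_fn = nn.CrossEntropyLoss()
for model, opt in [(mode1, opt1), (mode2, opt2)]:
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```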
Keywords: Theory, Transfer Learning
Conflicts: anu.edu.au, data61.csiro.au, cmu.edu