REVISITING NEGATIVE TRANSFER USING ADVERSARIAL LEARNING

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: An unintended consequence of feature sharing is that the model fits to correlated tasks within the dataset, a phenomenon termed negative transfer. In this paper, we revisit the problem of negative transfer in the multi-task setting and find that its corrosive effects extend to a wide range of linear and non-linear models, including neural networks. We first study the effects of negative transfer in a principled way and show that previously proposed counter-measures are insufficient, particularly for trainable features. We then propose an adversarial training approach that mitigates negative transfer by viewing the problem in a domain adaptation setting. Finally, empirical results on multi-task attribute prediction on the AWA and CUB datasets further validate the need for correcting negative sharing in an end-to-end manner.
Keywords: Negative Transfer, Adversarial Learning
TL;DR: We look at negative transfer from a domain adaptation point of view to derive an adversarial learning algorithm.
Data: [CUB-200-2011](https://paperswithcode.com/dataset/cub-200-2011)
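
The abstract and TL;DR frame negative transfer as a domain adaptation problem solved with adversarial learning, but this page does not spell out the architecture. Below is a minimal sketch of the standard adversarial building block for such setups, a DANN-style gradient reversal layer (Ganin & Lempitsky, 2015), paired with a shared encoder, per-attribute heads, and a discriminator. The discriminator target shown here (a task/group identity) and all names (`AdversarialMultiTaskNet`, `lambd`, dimensions) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by -lambd on backward.

    This lets a single optimizer train the discriminator to predict its target
    while pushing the shared encoder to make that target unpredictable.
    """
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # no gradient w.r.t. lambd

class AdversarialMultiTaskNet(nn.Module):
    """Hypothetical multi-task net: shared encoder, per-attribute heads,
    and an adversarial discriminator fed through gradient reversal."""
    def __init__(self, in_dim, feat_dim, num_attributes, num_groups, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # One binary head per attribute (as in AWA/CUB attribute prediction).
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, 1) for _ in range(num_attributes)
        )
        # Adversary tries to recover a group/task identity from shared features.
        self.discriminator = nn.Linear(feat_dim, num_groups)

    def forward(self, x):
        feats = self.encoder(x)
        attr_logits = torch.cat([head(feats) for head in self.heads], dim=1)
        adv_logits = self.discriminator(GradReverse.apply(feats, self.lambd))
        return attr_logits, adv_logits
```

In training, one would sum the per-attribute losses (e.g. `BCEWithLogitsLoss` on `attr_logits`) with a cross-entropy loss on `adv_logits`; because of the reversal, the discriminator is trained to succeed while the encoder is trained to defeat it, discouraging the shared features from encoding the correlations that drive negative transfer.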