Controllable Concept Transfer of Intermediate Representations

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: concept transfer, transfer learning, transferable representations
TL;DR: We propose a novel approach for controlling the transfer of user-determined semantic concepts from source to target task
Abstract: With the proliferation of large pre-trained models across domains, transfer learning has gained prominence: intermediate representations from these models can be leveraged to train better (target) task-specific models, possibly with limited labeled data. Although transfer learning is beneficial in many cases, it can also transfer undesirable information to the target task, which may severely curtail performance in the target domain or raise ethical concerns related to privacy and/or fairness. In this paper, we propose a novel approach for controlling the transfer of user-determined semantic concepts (e.g., color, glasses) in intermediate source representations to target tasks, without the need to retrain the source model, which can otherwise be expensive or even infeasible. Notably, this is a greater challenge than blocking concepts in the input representation, since a given intermediate source representation is biased towards the source task it was originally trained to solve, possibly entangling the desired concepts further. We evaluate our approach qualitatively and quantitatively in the visual domain, showcasing its efficacy for both classification and generative source models.
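The abstract does not spell out the mechanism, but the general idea of removing a user-chosen concept from a frozen model's intermediate representation can be sketched with a standard linear-probe null-space projection. This is our own illustrative assumption, not necessarily the paper's method; the data, the "glasses" concept labels, and the probe are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy intermediate representations (n samples, d dims) from a frozen source model.
n, d = 200, 16
Z = rng.normal(size=(n, d))

# Binary labels for a hypothetical user-chosen concept (e.g. "wears glasses").
y = rng.integers(0, 2, size=n)

# Inject the concept along a fixed direction so it is linearly decodable.
w_true = rng.normal(size=d)
Z += np.outer(y - 0.5, w_true)

# Estimate the concept direction with a least-squares linear probe.
w, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
w /= np.linalg.norm(w)

# Remove the concept by projecting representations onto the
# null space of the probe direction: Z_clean = Z - (Z w) w^T.
Z_clean = Z - np.outer(Z @ w, w)

# After projection, the probe direction carries no concept signal (~0).
print(np.abs(Z_clean @ w).max())
```

Iterating this probe-then-project step (as in iterative null-space projection methods) removes residual linearly decodable traces of the concept; the intermediate-representation setting described in the abstract is harder because the source task may entangle the concept non-linearly.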
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip