Detach and Adapt: Learning Cross-Domain Disentangled Deep Representation

25 Jan 2020 · OpenReview Archive Direct Upload
Abstract: While representation learning aims to derive interpretable features for describing visual data, representation disentanglement goes further by producing features in which particular image attributes can be identified and manipulated. However, this task cannot easily be addressed without ground-truth annotation for the training data. To address this problem, we propose a novel deep learning model, the Cross-Domain Representation Disentangler (CDRD). By observing fully annotated source-domain data and unlabeled target-domain data of interest, our model bridges the information across data domains and transfers the attribute information accordingly, so that cross-domain feature disentanglement and adaptation can be jointly performed. In the experiments, we provide qualitative results to verify our disentanglement capability. Moreover, we confirm that our model can be applied to classification tasks in unsupervised domain adaptation, and performs favorably against state-of-the-art image disentanglement and translation methods.