Variational Recurrent Adversarial Deep Domain Adaptation
Sanjay Purushotham, Wilka Carvalho, Tanachat Nilanon, Yan Liu
Nov 04, 2016 (modified: Mar 09, 2017) · ICLR 2017 conference submission · Readers: everyone
Abstract: We study the problem of learning domain-invariant representations for time series data while transferring the complex temporal latent dependencies between the domains. Our model, termed Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), is built atop a variational recurrent neural network (VRNN) and trains adversarially to capture complex temporal relationships that are domain-invariant. This is (as far as we know) the first model to capture and transfer temporal latent dependencies in multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies improves our model's ability to create domain-invariant representations, allowing it to outperform current state-of-the-art deep domain adaptation approaches.
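The adversarial training the abstract refers to is typically realized with a gradient reversal layer (GRL): a domain discriminator sits on top of the encoder's features, and while the discriminator descends its own domain-classification loss, the encoder receives the *negated* gradient, pushing it to produce features the discriminator cannot tell apart. The sketch below illustrates just this mechanism for a single step with a toy logistic discriminator; all variable names (`z`, `w`, `grl_backward`, etc.) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# --- Gradient Reversal Layer (GRL) ---
# Forward pass: identity. Backward pass: multiply incoming gradients by -lam.
def grl_forward(z):
    return z

def grl_backward(grad_out, lam=1.0):
    return -lam * grad_out

# One illustrative step: a logistic domain discriminator applied to a
# feature vector z produced by the recurrent encoder. d = 1 means
# "target domain".
z = np.array([0.8, -0.3])        # encoder features (toy values)
w = np.array([0.5, 0.2])         # discriminator weights (toy values)
d = 1.0                          # true domain label

p = sigmoid(w @ grl_forward(z))  # discriminator's domain prediction
dL_dz = (p - d) * w              # binary cross-entropy gradient w.r.t. z

# The discriminator descends its own gradient, but the encoder receives
# the reversed gradient through the GRL -- it ascends the domain loss,
# which drives its features toward domain invariance.
encoder_grad = grl_backward(dL_dz)
print(encoder_grad)              # equals -(p - d) * w
```

In a full VRADA-style model this adversarial term would be combined with the VRNN's reconstruction/KL objective and a source-label classification loss; only the domain-discriminator branch is shown here.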
TL;DR: We propose a Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) approach to capture and transfer temporal latent dependencies in multivariate time-series data.
Keywords: Deep learning, Transfer Learning
Conflicts: usc.edu, nyu.edu, nec-labs.com