Temporal Difference Variational Auto-Encoder

27 Sept 2018, 22:38 (modified: 28 Mar 2019, 16:51), ICLR 2019 Conference Blind Submission
Keywords: generative models, variational auto-encoders, state space models, temporal difference learning
TL;DR: A generative model of temporal data that builds an online belief state, operates in latent space, and performs jumpy predictions and rollouts of states.
Abstract: To act and plan in complex environments, we posit that agents should have a mental simulator of the world with three characteristics: (a) it should build an abstract state representing the condition of the world; (b) it should form a belief which represents uncertainty about the world; (c) it should go beyond simple step-by-step simulation and exhibit temporal abstraction. Motivated by the absence of a model satisfying all these requirements, we propose TD-VAE, a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions. TD-VAE is trained on pairs of temporally separated time points, using an analogue of temporal difference learning from reinforcement learning.
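The abstract states that TD-VAE is trained on pairs of temporally separated time points rather than on consecutive steps. A minimal sketch of how such training pairs might be sampled from a sequence is shown below; the function name and the `max_gap` parameter are illustrative assumptions, not details from the paper.

```python
import random


def sample_training_pairs(seq_len, num_pairs, max_gap=16, seed=0):
    """Sample (t1, t2) index pairs with t1 < t2 and t2 - t1 <= max_gap.

    Each pair picks a start index t1 and a second index t2 some
    random number of steps later, so the model sees temporally
    separated points instead of only adjacent transitions.
    (Hypothetical helper; the gap distribution here is uniform.)
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(num_pairs):
        t1 = rng.randrange(0, seq_len - 1)
        # Largest admissible gap given the sequence boundary.
        hi = min(max_gap, seq_len - 1 - t1)
        gap = rng.randrange(1, hi + 1)
        pairs.append((t1, t1 + gap))
    return pairs
```

In a training loop, each sampled pair `(t1, t2)` would supply the two observations whose latent states the model connects, replacing the fixed one-step offset used by conventional sequence models.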