Temporal Difference Variational Auto-Encoder

Karol Gregor, George Papamakarios, Frederic Besse, Lars Buesing, Theophane Weber

Sep 27, 2018 ICLR 2019 Conference Blind Submission
  • Abstract: To act and plan in complex environments, we posit that agents should have a mental simulator of the world with three characteristics: (a) it should build an abstract state representing the condition of the world; (b) it should form a belief which represents uncertainty about the world; (c) it should go beyond simple step-by-step simulation, and exhibit temporal abstraction. Motivated by the absence of a model satisfying all these requirements, we propose TD-VAE, a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions. TD-VAE is trained on pairs of temporally separated time points, using an analogue of the temporal difference learning used in reinforcement learning.
  • Keywords: generative models, variational auto-encoders, state space models, temporal difference learning
  • TL;DR: Generative model of temporal data that builds an online belief state, operates in latent space, and makes jumpy predictions and rollouts of states.
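The abstract's training scheme (sample a pair of temporally separated points, form belief states online, and tie the belief at the later time to a jumpy latent transition from the earlier one) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the network sizes, the diagonal-Gaussian MLPs, and all module names (`GaussianNet`, `TDVAESketch`, etc.) are assumptions made for exposition, and log-density constants are dropped.

```python
import torch
import torch.nn as nn

class GaussianNet(nn.Module):
    """Small MLP outputting mean and log-variance of a diagonal Gaussian (illustrative)."""
    def __init__(self, in_dim, out_dim, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, out_dim)
        self.logvar = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between diagonal Gaussians, summed over the last dimension."""
    return 0.5 * ((logvar_p - logvar_q)
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1.0).sum(-1)

class TDVAESketch(nn.Module):
    """Hypothetical minimal TD-VAE-style model on a pair of time points (t1, t2)."""
    def __init__(self, x_dim=4, b_dim=16, z_dim=8):
        super().__init__()
        self.belief_rnn = nn.GRU(x_dim, b_dim, batch_first=True)   # b_t summarizes x_{<=t}
        self.p_belief = GaussianNet(b_dim, z_dim)                  # belief over state: p_B(z_t | b_t)
        self.q_smooth = GaussianNet(2 * b_dim + z_dim, z_dim)      # q(z_t1 | z_t2, b_t1, b_t2)
        self.p_trans = GaussianNet(z_dim, z_dim)                   # jumpy transition p(z_t2 | z_t1)
        self.decoder = GaussianNet(z_dim, x_dim)                   # reconstruction p(x_t2 | z_t2)

    def loss(self, x, t1, t2):
        b, _ = self.belief_rnn(x)                                  # (batch, T, b_dim), computed online
        b1, b2 = b[:, t1], b[:, t2]
        # Sample the state at the later time from its belief.
        mu2, lv2 = self.p_belief(b2)
        z2 = mu2 + torch.randn_like(mu2) * (0.5 * lv2).exp()
        # Infer the earlier state given z_t2 and both beliefs (smoothing posterior).
        mu1q, lv1q = self.q_smooth(torch.cat([b1, b2, z2], -1))
        z1 = mu1q + torch.randn_like(mu1q) * (0.5 * lv1q).exp()
        # (i) Keep the smoothing posterior close to the belief at t1.
        mu1b, lv1b = self.p_belief(b1)
        kl_t1 = gaussian_kl(mu1q, lv1q, mu1b, lv1b)
        # (ii) Make the jumpy transition from z_t1 explain z_t2 (TD-style consistency):
        #      penalize log p_B(z_t2 | b_t2) - log p_T(z_t2 | z_t1), constants dropped.
        mut, lvt = self.p_trans(z1)
        log_pB = -0.5 * (lv2 + (z2 - mu2) ** 2 / lv2.exp()).sum(-1)
        log_pT = -0.5 * (lvt + (z2 - mut) ** 2 / lvt.exp()).sum(-1)
        # (iii) Reconstruct the observation at t2 from z_t2.
        mux, lvx = self.decoder(z2)
        recon = 0.5 * (lvx + (x[:, t2] - mux) ** 2 / lvx.exp()).sum(-1)
        return (kl_t1 + log_pB - log_pT + recon).mean()
```

In use, one would draw random pairs `(t1, t2)` with `t1 < t2` from each training sequence and minimize `model.loss(x, t1, t2)`; because the transition acts directly in latent space across the gap `t2 - t1`, rollouts can skip intermediate steps, which is the "jumpy prediction" property the TL;DR refers to.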