Deep Transition-Encoding Networks for Learning Dynamics
David van Dijk, Scott Gigante, Alexander Strzalkowski, Guy Wolf, Smita Krishnaswamy
Feb 12, 2018 (modified: Jun 04, 2018) · ICLR 2018 Workshop Submission
Abstract: Markov processes, both classical and higher order, are often used to model dynamic processes, such as stock prices, molecular dynamics, and Monte Carlo methods. Previous works have shown that an autoencoder can be formulated as a specific type of Markov chain. Here, we propose a generative neural network known as a transition encoder, or transcoder, which learns such continuous-state dynamic processes. We show that the transcoder is able to learn both deterministic and stochastic dynamic processes on several systems. We explore a number of applications of the transcoder, including generating unseen trajectories and examining the propensity for chaos in a dynamic system. Finally, we show that the transcoder can speed up Markov Chain Monte Carlo (MCMC) sampling to a convergent distribution by training it to make several steps at a time.
Keywords: Markov process, autoencoder, deep learning, unsupervised learning, generative models
TL;DR: The transcoder is a generative neural network that is able to learn any stochastic or deterministic Markov process.
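The core idea of a transition encoder can be illustrated with a minimal sketch: a small neural network trained on (state, next-state) pairs so that it learns the transition function of a dynamic process and can then be rolled forward to generate trajectories. The sketch below is an assumption on our part, not the paper's architecture; it uses a hand-rolled one-hidden-layer network (NumPy only) fit to a deterministic Markov process, the logistic map x_{t+1} = r·x_t·(1 − x_t), which is a common toy system for studying chaos.

```python
import numpy as np

# Hypothetical minimal "transcoder" sketch (not the paper's architecture):
# a one-hidden-layer network trained to map state x_t to next state x_{t+1}
# for the logistic map x_{t+1} = r * x_t * (1 - x_t).
rng = np.random.default_rng(0)
r = 3.9  # chaotic regime of the logistic map

# (state, next-state) training pairs sampled from the state space
x = rng.uniform(0.0, 1.0, size=5000)
y = r * x * (1 - x)
X, Y = x[:, None], y[:, None]  # shape (N, 1)

# parameters: 1 -> 32 -> 1, tanh hidden layer
H = 32
W1 = rng.normal(0.0, 1.0, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)          # (N, H)
    pred = h @ W2 + b2                # (N, 1)
    err = pred - Y
    # mean-squared-error gradients via backprop
    g_pred = 2.0 * err / len(X)
    gW2, gb2 = h.T @ g_pred, g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = X.T @ g_h, g_h.sum(0)
    # full-batch gradient descent update
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def transcode(x0, steps):
    """Roll the learned transition map forward to generate a trajectory."""
    s = np.array([[x0]])
    traj = [x0]
    for _ in range(steps):
        s = np.tanh(s @ W1 + b1) @ W2 + b2
        traj.append(float(s[0, 0]))
    return traj

# training error of the learned one-step map
mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
```

Applying the learned map repeatedly, as `transcode` does, is what lets such a model generate unseen trajectories; the MCMC-acceleration idea mentioned in the abstract corresponds to training the network to predict several transition steps at once rather than one.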