Generative Adversarial Learning of Markov Chains

ICLR 2017 workshop submission
Abstract: We investigate generative adversarial training methods to learn a transition operator for a Markov chain, where the goal is to match its stationary distribution to a target data distribution. We propose a novel training procedure that avoids sampling directly from the stationary distribution, while still being able to reach the target distribution asymptotically. The model can start from random noise, is likelihood-free, and can generate multiple distinct samples during a single run. Preliminary experimental results show that the chain can generate high-quality samples as it approaches its stationary distribution, even with smaller architectures than those traditionally considered for Generative Adversarial Nets.
TL;DR: We can train Markov Chains with an adversarial network.
Keywords: Deep learning, Unsupervised Learning
Conflicts: stanford.edu, tsinghua.edu.cn, duke.edu
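
The abstract describes adversarial training of a Markov chain transition operator so that its stationary distribution matches the data distribution. Below is a minimal, hypothetical sketch of that general idea, not the authors' implementation: it assumes PyTorch, and the names Transition, Discriminator, train_step, and the unroll depth k are illustrative choices. The chain is unrolled for a few transition steps from random noise, and the resulting samples are trained against real data with a standard GAN discriminator loss. The paper's actual procedure additionally avoids sampling directly from the stationary distribution; that detail is not reproduced here.

```python
# Hypothetical sketch (not the authors' code) of adversarially training a
# Markov chain transition operator, assuming PyTorch and flattened 28x28 inputs.
import torch
import torch.nn as nn

class Transition(nn.Module):
    """Stochastic transition operator: maps the current state plus fresh noise
    to the next state of the chain."""
    def __init__(self, dim=784, noise_dim=64, hidden=512):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = torch.randn(x.size(0), self.noise_dim, device=x.device)
        return self.net(torch.cat([x, z], dim=1))

class Discriminator(nn.Module):
    """Standard GAN discriminator over states of the chain."""
    def __init__(self, dim=784, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(T, D, opt_T, opt_D, real, k=5):
    """One adversarial update: run the chain for k steps from noise, then ask
    the discriminator to separate the resulting samples from real data."""
    bce = nn.BCEWithLogitsLoss()
    ones = torch.ones(real.size(0), 1, device=real.device)
    zeros = torch.zeros(real.size(0), 1, device=real.device)

    x = torch.rand_like(real)          # chain starts from random noise
    for _ in range(k):                 # unroll k transition steps
        x = T(x)

    # Discriminator update: real data vs. chain samples after k steps.
    opt_D.zero_grad()
    d_loss = bce(D(real), ones) + bce(D(x.detach()), zeros)
    d_loss.backward()
    opt_D.step()

    # Transition-operator update: make chain samples look real to D.
    opt_T.zero_grad()
    g_loss = bce(D(x), ones)
    g_loss.backward()
    opt_T.step()
    return d_loss.item(), g_loss.item()

# Example usage with a dummy batch of flattened images.
T, D = Transition(), Discriminator()
opt_T = torch.optim.Adam(T.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
real = torch.rand(32, 784)
print(train_step(T, D, opt_T, opt_D, real))
```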