Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence
Mathias Berglund, Tapani Raiko
Dec 23, 2013 (modified: Dec 23, 2013)
ICLR 2014 workshop submission
Readers: everyone
Decision: submitted, no decision
Abstract: Contrastive Divergence (CD) and Persistent Contrastive Divergence (PCD) are popular methods for training the weights of Restricted Boltzmann Machines. However, both methods use an approximate method for sampling from the model distribution. As a side effect, these approximations yield significantly different variances for stochastic gradient estimates of individual samples. In this paper we show empirically that CD has a lower stochastic gradient estimate variance than exact sampling, while the sum of subsequent PCD estimates has a higher variance than exact sampling. The results give one explanation for the finding that CD can be used with smaller minibatches or higher learning rates than PCD.
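The per-sample stochastic gradient estimate whose variance the abstract discusses can be sketched as follows. This is a minimal NumPy illustration of the standard CD-1 weight-gradient estimate for a binary RBM, not the authors' experimental code; the RBM sizes, seed, and helper names are illustrative assumptions. Repeating the estimate for a fixed data vector exposes the sampling variance the paper measures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small binary RBM: 6 visible units, 4 hidden units.
n_vis, n_hid = 6, 4
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_bernoulli(p):
    # Draw binary states with the given activation probabilities.
    return (rng.random(p.shape) < p).astype(float)

def cd1_gradient(v0):
    """One CD-1 stochastic estimate of d log p(v0) / dW for a single sample."""
    # Positive phase: hidden probabilities given the data vector.
    h0_prob = sigmoid(b_hid + v0 @ W)
    h0 = sample_bernoulli(h0_prob)
    # One Gibbs step: reconstruct visible units, then hidden probabilities.
    v1_prob = sigmoid(b_vis + h0 @ W.T)
    v1 = sample_bernoulli(v1_prob)
    h1_prob = sigmoid(b_hid + v1 @ W)
    # CD-1: positive statistics minus approximate model statistics.
    return np.outer(v0, h0_prob) - np.outer(v1, h1_prob)

# Empirical variance of repeated per-sample estimates for one data vector.
v_data = sample_bernoulli(np.full(n_vis, 0.5))
grads = np.array([cd1_gradient(v_data) for _ in range(200)])
mean_grad = grads.mean(axis=0)           # shape (6, 4)
avg_var = float(grads.var(axis=0).mean())  # average elementwise variance
print(mean_grad.shape, avg_var)
```

Replacing the single Gibbs step with a persistent chain carried across updates would give the corresponding PCD estimate; the paper's point is that these two approximations sit on opposite sides of exact sampling in terms of this variance.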