Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator

Published: 12 Jan 2021, Last Modified: 05 May 2023
Venue: ICLR 2021 Oral
Keywords: gumbel, softmax, gumbel-softmax, straight-through, straightthrough, rao, rao-blackwell
Abstract: Gradient estimation in models with discrete latent variables is a challenging problem, because the simplest unbiased estimators tend to have high variance. To counteract this, modern estimators either introduce bias, rely on multiple function evaluations, or use learned, input-dependent baselines. Thus, there is a need for estimators that require minimal tuning, are computationally cheap, and have low mean squared error. In this paper, we show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization without increasing the number of function evaluations. This provably reduces the mean squared error. We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models.
One-sentence Summary: We reduce the variance of the straight-through Gumbel-Softmax estimator to improve its performance.
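
For intuition, here is a minimal PyTorch-style sketch of the idea described in the abstract: the forward pass uses a single hard categorical sample (so the downstream function is evaluated only once), while the backward pass averages the Gumbel-Softmax relaxation over several Gumbel noise draws resampled conditionally on that same sample. This is an illustrative assumption-based sketch, not the authors' released code; the function name, the number of conditional draws `k`, and other details are hypothetical, and the conditional resampling uses the standard truncated-Gumbel construction (as popularized by REBAR).

```python
import torch
import torch.nn.functional as F

def rb_st_gumbel_softmax(logits, tau=1.0, k=10):
    """Hypothetical sketch of a Rao-Blackwellized straight-through
    Gumbel-Softmax sample. Forward value: a hard one-hot sample D.
    Backward pass: the softmax-relaxation gradient averaged over `k`
    Gumbel draws resampled conditionally on D (one function evaluation)."""
    probs = F.softmax(logits, dim=-1)
    b = torch.multinomial(probs, 1)                        # hard sample D, shape (batch, 1)
    hard = torch.zeros_like(logits).scatter_(-1, b, 1.0)   # one-hot encoding of D

    soft_avg = torch.zeros_like(logits)
    for _ in range(k):
        with torch.no_grad():
            # Resample perturbed log-probabilities z conditioned on argmax(z) = D,
            # using the standard truncated-Gumbel trick.
            v = torch.rand_like(logits).clamp_(min=1e-20)
            v_b = v.gather(-1, b)
            z = -torch.log(-torch.log(v) / probs - torch.log(v_b))
            z.scatter_(-1, b, -torch.log(-torch.log(v_b)))
            g = z - logits                                  # implied noise, held fixed
        # Gradient flows only through the additive `logits` path, as in ST-GS.
        soft_avg = soft_avg + F.softmax((logits + g) / tau, dim=-1) / k

    # Straight-through trick: forward value is the hard sample D; the backward
    # pass uses the averaged relaxation (a Monte Carlo Rao-Blackwellization).
    return hard + soft_avg - soft_avg.detach()
```

Averaging the relaxation before applying the straight-through trick is equivalent to averaging the per-draw straight-through gradients, since the gradient is linear in the relaxed sample; this is how the sketch keeps the number of function evaluations fixed at one.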
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Code: [5 community implementations on Papers with Code](https://paperswithcode.com/paper/?openreview=Mk6PZtgAgfq)
Data: [ListOps](https://paperswithcode.com/dataset/listops)