Mirror Descent with Relative Smoothness in Measure Spaces, with application to Sinkhorn and EM

Published: 31 Oct 2022, Last Modified: 13 Jan 2023
Venue: NeurIPS 2022 Accept
Readers: Everyone
Keywords: optimization, mirror descent, measure spaces, Sinkhorn's algorithm, expectation-maximization
TL;DR: We derive the convergence of mirror descent for relatively smooth and strongly convex pairs of functionals over measure spaces, applying it to Sinkhorn's primal iterations and the EM algorithm through the KL divergence.
Abstract: Many problems in machine learning can be formulated as optimizing a convex functional over a vector space of measures. This paper studies the convergence of the mirror descent algorithm in this infinite-dimensional setting. Defining Bregman divergences through directional derivatives, we derive the convergence of the scheme for relatively smooth and convex pairs of functionals. Such assumptions allow us to handle non-smooth functionals such as the Kullback-Leibler (KL) divergence. Applying our result to joint distributions and the KL divergence, we show that Sinkhorn's primal iterations for entropic optimal transport in the continuous setting correspond to a mirror descent, and we obtain a new proof of its (sub)linear convergence. We also show that Expectation Maximization (EM) can always formally be written as a mirror descent. When optimizing only over the latent distribution while fixing the mixture parameters (which corresponds to the Richardson-Lucy deconvolution scheme in signal processing), we derive sublinear rates of convergence.
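The abstract identifies Sinkhorn's primal iterations for entropic optimal transport as mirror descent steps with the KL divergence playing the role of the Bregman divergence. For orientation only, below is a minimal NumPy sketch of the classical finite-dimensional Sinkhorn iterations, whose primal couplings are the iterates in question; this is the standard discrete scheme, not the continuous-measure analysis of the paper, and the function name `sinkhorn`, the grid, and the parameter choices are illustrative assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iters=500):
    """Discrete Sinkhorn iterations for entropic optimal transport.

    a, b : source and target histograms (each sums to 1)
    C    : cost matrix of shape (len(a), len(b))
    eps  : entropic regularization strength
    Returns the primal coupling pi = diag(u) K diag(v).
    """
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)           # scale rows to match the first marginal
        v = b / (K.T @ u)         # scale columns to match the second marginal
    return u[:, None] * K * v[None, :]

# Toy example: two Gaussian-like histograms on a 1-D grid.
x = np.linspace(0.0, 1.0, 50)
a = np.exp(-((x - 0.2) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.01); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2

pi = sinkhorn(a, b, C, eps=0.05)
# The marginal error of the primal coupling shrinks as iterations proceed.
print(np.abs(pi.sum(axis=1) - a).max())
```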
Supplementary Material: pdf