Keywords: Convergence analysis; nonconvex optimization; matrix volume
TL;DR: We show that a family of nonconvex optimization problems arising in unsupervised learning and representation learning can be solved by linearized ADMM with a convergence guarantee.
Abstract: We present an algorithm that aims to solve a family of nonconvex optimization problems with a guarantee of convergence to a global optimum. The problems in this family share a common formulation: maximize the volume of a matrix subject to linear constraints. Problems of this form have found many applications in unsupervised learning and representation learning, especially when identifiability of the latent representation is important for the task. Specific examples, depending on the type of constraints, include bounded component analysis, sparse component analysis (complete dictionary learning), nonnegative component analysis (nonnegative matrix factorization), and admixture component analysis, to name a few. Computationally, the problem is hard because of the nonconvex objective. We propose an algorithm based on linearized ADMM for these problems. Although a similar algorithm has appeared in the literature, we note that a small modification is required to guarantee that the algorithm provably converges even for convex problems. We then present the main contribution of this work: a guarantee of convergence to a global optimum at a sublinear rate. We do assume some mild conditions on the initialization, but our numerical experiments indicate that these conditions are easy to satisfy.
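To make the problem template concrete, below is a minimal sketch of a vanilla linearized ADMM loop for volume maximization, assuming a formulation of the form minimize -log|det X| subject to MX lying in a convex set C (written with the splitting MX = Z). All names here (`prox_neg_logdet`, `linearized_admm`, `project_C`) and the step-size choices are hypothetical illustrations, not the paper's algorithm; in particular, the paper's small modification that makes convergence provable is not reproduced.

```python
import numpy as np

def prox_neg_logdet(V, t):
    """Prox of f(X) = -log|det X| with step t, computed on the singular
    values: |det X| is the product of singular values, and the Frobenius
    coupling is unitarily invariant, so the prox reduces to solving
    x^2 - s*x - t = 0 for each singular value s."""
    U, s, Wt = np.linalg.svd(V, full_matrices=False)
    s_new = (s + np.sqrt(s ** 2 + 4.0 * t)) / 2.0
    return (U * s_new) @ Wt

def linearized_admm(M, project_C, n, rho=1.0, iters=500, seed=0):
    """Sketch of linearized ADMM for
        minimize  -log|det X|   subject to  M X in C,
    via the splitting M X = Z, Z in C. `project_C` is the Euclidean
    projection onto the constraint set C."""
    rng = np.random.default_rng(seed)
    X = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # mild init, as in the abstract
    Z = M @ X
    U = np.zeros_like(Z)
    mu = 1.0 / (rho * np.linalg.norm(M, 2) ** 2)  # step for the linearized term
    for _ in range(iters):
        # X-step: prox of -log|det| after linearizing the augmented quadratic
        G = M.T @ (M @ X - Z + U)
        X = prox_neg_logdet(X - mu * rho * G, mu)
        # Z-step: projection onto the constraint set C
        Z = project_C(M @ X + U)
        # dual ascent on the scaled multiplier
        U = U + M @ X - Z
    return X

# Example: a bounded-component-analysis flavor, with entries of X kept in [-1, 1]
X = linearized_admm(np.eye(3), lambda W: np.clip(W, -1.0, 1.0), n=3)
```

Because -log|det X| is nonconvex over general square matrices, the standard convex ADMM analysis does not cover this loop; the abstract's claim is precisely that a modified variant converges to a global optimum at a sublinear rate under mild initialization conditions.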
Supplementary Material: pdf
Primary Area: optimization
Submission Number: 13853