Stochastically Controlled Compositional Gradient for the Composition Problem

25 Sept 2019 (modified: 05 May 2023), ICLR 2020 Conference Blind Submission
Keywords: Non-convex optimization, Composition problem, Stochastically controlled compositional gradient
TL;DR: We devise a stochastically controlled compositional gradient algorithm for the composition problem
Abstract: We consider composition problems of the form $\frac{1}{n}\sum\nolimits_{i=1}^n F_i\big(\frac{1}{n}\sum\nolimits_{j=1}^n G_j(x)\big)$. Composition optimization arises in many important machine learning applications: reinforcement learning, variance-aware learning, nonlinear embedding, and many others. Both gradient descent and stochastic gradient descent are straightforward solutions, but both require computing $\frac{1}{n}\sum\nolimits_{j=1}^n G_j(x)$ at every iteration, which is inefficient, especially when $n$ is large. Therefore, with the aim of significantly reducing the query complexity of such problems, we design a stochastically controlled compositional gradient algorithm that incorporates two kinds of variance reduction techniques and works in both strongly convex and non-convex settings. We also provide a mini-batch version of the proposed method that improves query complexity with respect to the mini-batch size. Comprehensive experiments demonstrate the superiority of the proposed method over existing methods.
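Since the gradient of $f(x) = \frac{1}{n}\sum\nolimits_{i=1}^n F_i(G(x))$ with $G(x) = \frac{1}{n}\sum\nolimits_{j=1}^n G_j(x)$ is $\nabla f(x) = (\partial G(x))^{\top} \frac{1}{n}\sum\nolimits_{i=1}^n \nabla F_i(G(x))$ by the chain rule, a naive stochastic step still has to evaluate the full inner sum. The following is a minimal Python/NumPy sketch of a stochastic compositional step that instead estimates both the inner value and its Jacobian on sampled mini-batches. All names and batch sizes here are illustrative assumptions, not the paper's interface, and the paper's actual method layers two variance-reduction mechanisms on top of estimators like these.

```python
# A minimal, hypothetical sketch of one stochastic compositional gradient step.
# The callables G_j, jac_G_j, grad_F_i and the batch sizes are illustrative
# assumptions, not the paper's actual interface or algorithm.
import numpy as np

def compositional_step(x, G_j, jac_G_j, grad_F_i, n, lr=0.1,
                       b_inner=8, b_outer=8, rng=None):
    """One step of x <- x - lr * J(x)^T grad F(y), where y and J are
    estimated on sampled mini-batches instead of the full inner sum."""
    rng = rng or np.random.default_rng()

    # Estimate the inner value y ~ (1/n) sum_j G_j(x) on a mini-batch.
    idx_y = rng.integers(0, n, size=b_inner)
    y = np.mean([G_j(j, x) for j in idx_y], axis=0)

    # Estimate the inner Jacobian on an independent mini-batch.
    idx_J = rng.integers(0, n, size=b_inner)
    J = np.mean([jac_G_j(j, x) for j in idx_J], axis=0)

    # Estimate the outer gradient at the estimated inner value.
    idx_F = rng.integers(0, n, size=b_outer)
    g_outer = np.mean([grad_F_i(i, y) for i in idx_F], axis=0)

    # Chain rule: grad f(x) ~ J^T grad F(y).
    return x - lr * J.T @ g_outer
```

The point of the sketch is query complexity: each step touches only $O(b_{\text{inner}} + b_{\text{outer}})$ component functions rather than all $n$ inner functions. The stochastically controlled method described in the abstract additionally applies variance reduction to the estimates of $y$ and $J$, which this plain sketch omits.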