Learning Deep Models: Critical Points and Local Openness

ICLR 2018 Workshop Submission · 11 Feb 2018 (modified: 15 Feb 2018)
Keywords: Non-convex Optimization, Local/Global Equivalence, Training Deep Models, Local Open Mappings
Abstract: In this paper we present a unifying framework for studying the local/global optima equivalence of the optimization problems arising from training non-convex deep models. Using the local openness property of the underlying training models, we provide simple sufficient conditions under which any local optimum of the resulting optimization problem is globally optimal. We first completely characterize the local openness of the matrix multiplication mapping in its range. We then use this characterization to: 1) show that every local optimum of a two-layer linear network is globally optimal; unlike many existing results, this requires no assumptions on the target data matrix Y or the input data matrix X; 2) develop an almost complete characterization of the local/global optima equivalence of multi-layer linear neural networks; 3) show the local/global optima equivalence of non-linear deep models with a certain pyramidal structure. Unlike some existing works, our result requires no assumption on the differentiability of the activation functions.
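As an illustrative sketch of the two-layer claim (not code from the paper), the numpy example below trains a two-layer linear network W2·W1·X by plain gradient descent and compares the final loss with the global least-squares optimum. All dimensions, the random data, the learning rate, and the hidden width d_h ≥ min(d_in, d_out) (which makes the product W2·W1 unconstrained, so the global optimum coincides with ordinary least squares) are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: d_in inputs, d_h hidden units, d_out outputs, n samples.
d_in, d_h, d_out, n = 5, 8, 3, 100
X = rng.standard_normal((d_in, n))
Y = rng.standard_normal((d_out, n))

# With d_h >= min(d_in, d_out), the product W2 @ W1 can realize any
# d_out x d_in matrix, so the global optimum of
#   min_{W1, W2} ||Y - W2 @ W1 @ X||_F^2
# equals the ordinary least-squares optimum.
W_ls, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)   # solves X.T @ W ~ Y.T
global_min = np.linalg.norm(Y - W_ls.T @ X) ** 2

# Gradient descent on the factored objective from a small random init.
W1 = 0.1 * rng.standard_normal((d_h, d_in))
W2 = 0.1 * rng.standard_normal((d_out, d_h))
lr = 1e-3
for _ in range(20_000):
    R = W2 @ W1 @ X - Y          # residual
    g1 = W2.T @ R @ X.T          # dL/dW1 (factor 2 absorbed into lr)
    g2 = R @ X.T @ W1.T          # dL/dW2
    W1 -= lr * g1
    W2 -= lr * g2

loss = np.linalg.norm(Y - W2 @ W1 @ X) ** 2
print(f"gradient descent loss: {loss:.6f}")
print(f"global minimum:        {global_min:.6f}")
```

Consistent with the paper's claim that such networks have no spurious local optima, the two printed values should agree closely; note this numerical run is a sanity check, not a proof.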