## GLAD: Learning Sparse Graph Recovery

25 Sep 2019 (modified: 11 Mar 2020) · ICLR 2020 Conference Blind Submission
• Keywords: Meta learning, automated algorithm design, learning structure recovery, Gaussian graphical models
• TL;DR: A data-driven learning algorithm that unrolls an Alternating Minimization optimization procedure for sparse graph recovery.
• Abstract: Recovering sparse conditional independence graphs from data is a fundamental problem in machine learning with wide applications. A popular formulation of the problem is $\ell_1$-regularized maximum likelihood estimation, and many convex optimization algorithms have been designed to solve this formulation and recover the graph structure. Recently, there has been a surge of interest in learning algorithms directly from data; in this setting, the goal is to learn a mapping from the empirical covariance matrix to the sparse precision matrix. This is a challenging task, since the symmetric positive definiteness (SPD) and sparsity of the matrix are not easy to enforce in learned algorithms, and a direct mapping from data to precision matrix may require many parameters. We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as its model inductive bias and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data.
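
The AM scheme that the abstract refers to can be sketched in NumPy. The following is a minimal illustration, not the paper's implementation: it alternates a closed-form eigendecomposition update (which keeps the estimate SPD) with elementwise soft-thresholding (which enforces sparsity). The penalty weights `lam` and `rho` are fixed constants here, whereas GLAD replaces them with learned, per-iteration parameters; the function names and initialization are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def am_graph_recovery(S, lam=1.0, rho=0.1, n_steps=30):
    """Sketch of the Alternating Minimization iteration that GLAD unrolls.

    Targets the l1-regularized Gaussian MLE
        min_Theta  -logdet(Theta) + tr(S @ Theta) + rho * ||Z||_1
    with a quadratic coupling (lam/2) * ||Theta - Z||_F^2 between the
    SPD variable Theta and the sparse variable Z.
    """
    d = S.shape[0]
    Z = np.linalg.inv(S + rho * np.eye(d))  # illustrative initialization
    for _ in range(n_steps):
        # Theta-update (closed form): stationarity gives
        #   lam * Theta - Theta^{-1} = lam * Z - S,
        # solved via the eigendecomposition of B = lam * Z - S.
        B = lam * Z - S
        gamma, U = np.linalg.eigh(B)
        theta_eigs = (gamma + np.sqrt(gamma**2 + 4.0 * lam)) / (2.0 * lam)
        Theta = (U * theta_eigs) @ U.T  # SPD by construction (eigenvalues > 0)
        # Z-update: soft-thresholding enforces sparsity of the estimate.
        Z = soft_threshold(Theta, rho / lam)
    return Theta, Z
```

In GLAD, this fixed iteration becomes the network architecture: each unrolled step is a layer, and small neural networks output the step-dependent `lam` and `rho`, trained end-to-end by supervised learning against ground-truth precision matrices.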