Efficiently Learning the Graph for Semi-supervised Learning

Published: 08 May 2023 (UAI 2023), Last Modified: 26 Jun 2023
Keywords: Semi-supervised learning, Data-driven Algorithm Design, Efficiency, Hyperparameter Selection, Learning Theory
TL;DR: We show how to efficiently learn the graph (Gaussian bandwidth) parameter in classic semi-supervised learning by exploiting sparsity and conjugate gradient approximations.
Abstract: Computational efficiency is a major bottleneck in using classic graph-based approaches for semi-supervised learning on datasets with a large number of unlabeled examples. Known techniques to improve efficiency typically involve an approximation of the graph regularization objective, but suffer two major drawbacks: first, the graph is assumed to be known or constructed with heuristic hyperparameter values; second, they do not provide a principled approximation guarantee for learning over the full unlabeled dataset. Building on recent work on learning graphs for semi-supervised learning from multiple datasets for problems from the same domain, and leveraging fast approximation techniques for solving linear systems in the graph Laplacian matrix, we propose algorithms that overcome both of the above limitations. We show a formal separation in the learning-theoretic complexity of sparse and dense graph families. We further show how to approximately learn the best graphs from the sparse families efficiently using the conjugate gradient method. Our approach can also be used to learn the graph efficiently online with sub-linear regret, under mild smoothness assumptions. Our online learning results are stated generally, and may be useful for approximate and efficient parameter tuning in other problems. We implement our approach and demonstrate significant (~10-100x) speedups over prior work on semi-supervised learning with learned graphs on benchmark datasets.
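To make the setting concrete, below is a minimal sketch of the classic graph-based pipeline the abstract refers to: build a Gaussian-kernel graph with bandwidth sigma (the graph hyperparameter being tuned), then compute the harmonic label-propagation solution by solving a linear system in the graph Laplacian with the conjugate gradient method. The toy data, the `harmonic_solution` helper, and the choice sigma=0.5 are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.sparse.linalg import cg

# Hypothetical toy data: two well-separated 2-D clusters, one labeled point each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.full(40, -1)           # -1 marks an unlabeled example
y[0], y[20] = 0, 1            # one label per cluster

def harmonic_solution(X, y, sigma):
    """Gaussian-kernel graph + harmonic label propagation via conjugate gradient."""
    # Dense Gaussian-kernel weights W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W              # graph Laplacian L = D - W
    U = y < 0                              # mask of unlabeled nodes
    # Harmonic condition (L f = 0 on unlabeled nodes, f fixed on labeled ones)
    # reduces to the SPD system  L_UU f_U = W_UL y_L, solved iteratively by CG.
    rhs = W[np.ix_(U, ~U)] @ y[~U]
    f_U, info = cg(L[np.ix_(U, U)], rhs)
    f = y.astype(float).copy()
    f[U] = f_U
    return f

f = harmonic_solution(X, y, sigma=0.5)
preds = (f > 0.5).astype(int)             # threshold the harmonic scores
print(preds)
```

In the paper's setting one would re-solve such systems for many candidate bandwidths across multiple datasets, which is why the cost of each Laplacian solve, and the sparsity of the graph family, matters.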
Supplementary Material: pdf
Other Supplementary Material: zip