Keywords: sparse additive model, manifold regularization, bilevel optimization, robustness, learning theory
Abstract: Semi-supervised learning with manifold regularization is a classical framework for learning jointly from labeled and unlabeled data, whose key requirement is that the support of the unknown marginal distribution possesses the geometric structure of a Riemannian manifold. Typically, the manifold regularization induced by the Laplace-Beltrami operator is approximated empirically by a Laplacian regularizer constructed from the entire training data through the corresponding graph Laplacian matrix. However, the graph Laplacian matrix depends heavily on a pre-specified similarity metric and may induce inappropriate penalties in the presence of redundant and noisy input variables.
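For concreteness, the empirical Laplacian penalty referred to above is usually written in the following standard form (the notation here follows the common manifold-regularization setup and is illustrative; the submission's exact formulation may differ):
$$\widehat{\|f\|}_I^2 \;=\; \frac{1}{n^2}\sum_{i,j=1}^{n} W_{ij}\,\bigl(f(x_i)-f(x_j)\bigr)^2 \;=\; \frac{1}{n^2}\,\mathbf{f}^\top L\,\mathbf{f}, \qquad L = D - W,\quad D_{ii}=\sum_{j} W_{ij},$$
where $\mathbf{f}=(f(x_1),\dots,f(x_n))^\top$ collects predictions over all labeled and unlabeled points and $W$ is the similarity matrix, typically $W_{ij}=\exp\!\bigl(-\|x_i-x_j\|^2/(2\sigma^2)\bigr)$. The dependence of $L$ on the pre-specified $W$ is precisely where redundant or noisy input variables can distort the penalty.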
To address these issues, this paper proposes a new Semi-Supervised Meta Additive Model (S$^2$MAM) based on a bilevel optimization scheme, which simultaneously identifies informative variables, updates the similarity matrix, and produces interpretable predictions. Theoretical guarantees are provided for S$^2$MAM, including the convergence of the optimization procedure and a statistical generalization bound. Experimental assessments on synthetic and real-world datasets validate the robustness and interpretability of the proposed approach.
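A generic sketch of the bilevel structure described above (illustrative only; the parameter $\theta$, the similarity matrix $W_\theta$, and the loss terms are placeholders rather than the submission's actual objective):
$$\min_{\theta}\ \mathcal{L}_{\mathrm{outer}}\bigl(f^{*}_{\theta}\bigr) \quad \text{s.t.} \quad f^{*}_{\theta} \in \arg\min_{f}\ \mathcal{L}_{\mathrm{lab}}(f) \;+\; \lambda_A\,\|f\|_K^2 \;+\; \lambda_I\,\mathbf{f}^\top L(W_\theta)\,\mathbf{f},$$
where the outer level adapts the variable weights and similarity matrix $W_\theta$, and the inner level fits the additive model under the Laplacian penalty induced by $L(W_\theta)$.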
Supplementary Material: zip
Primary Area: learning theory
Submission Number: 7767