Kernel-Driven Self-Supervision for Multi-View Learning Over Graphs

Published: 01 Jan 2024 · Last Modified: 15 May 2025 · IEEECONF 2024 · CC BY-SA 4.0
Abstract: Self-supervised (SeSu) learning is a powerful subclass of unsupervised methods that aims to alleviate the need for large-scale annotated datasets when training data-hungry machine learning models. To this end, SeSu methods learn contextualized embeddings from unlabeled data to efficiently tackle downstream tasks. Despite their success, most existing SeSu approaches are heuristic and typically fail to exploit the multiple views of data available for the problem at hand. This becomes particularly challenging when non-linear dependencies exist among multiple views or data samples, as often arises in applications such as learning over large-scale graphs. In this context, the present paper builds upon the kernel-based learning framework to introduce principled SeSu approaches. Specifically, in light of the celebrated Representer theorem, this work posits that the optimal function for addressing the downstream problem resides in a reproducing kernel Hilbert space (RKHS). The proposed SeSu approach then learns "low-dimensional" embeddings that approximate the feature map associated with the optimal underlying kernel. By judiciously combining the embeddings learned from multiple views of the data, this paper demonstrates that a wide range of downstream problems over graphs can be solved efficiently. Numerical tests on synthetic and real graph datasets showcase the merits of the proposed approach relative to competing alternatives.
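The abstract does not spell out how the low-dimensional embeddings approximating the kernel's feature map are computed, so the following is only an illustrative sketch under an assumption: a standard Nyström-style approximation, where landmark columns of a kernel matrix yield embeddings `Z` with `Z @ Z.T ≈ K`. The function name `nystrom_embeddings` and the RBF toy kernel are hypothetical choices, not the paper's method.

```python
# Sketch only: assumes a Nystrom-style low-rank approximation of a
# kernel's feature map; this is NOT the paper's specific algorithm.
import numpy as np

def nystrom_embeddings(K, num_landmarks, seed=None):
    """Return embeddings Z (n x r, r <= num_landmarks) with Z @ Z.T ~ K."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    idx = rng.choice(n, size=num_landmarks, replace=False)
    C = K[:, idx]                    # n x m block of cross-kernel values
    W = K[np.ix_(idx, idx)]          # m x m landmark-landmark kernel block
    # Pseudo-inverse square root of W via eigendecomposition,
    # dropping numerically zero eigenvalues for stability.
    vals, vecs = np.linalg.eigh(W)
    keep = vals > 1e-10
    inv_sqrt = vecs[:, keep] / np.sqrt(vals[keep])
    return C @ inv_sqrt              # n x r low-dimensional embeddings

# Toy check: RBF kernel over random points (a stand-in for a graph kernel).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq_dists)
Z = nystrom_embeddings(K, num_landmarks=30, seed=0)
err = np.linalg.norm(K - Z @ Z.T) / np.linalg.norm(K)
```

In a multi-view setting, one such embedding matrix per view could then be combined (e.g. concatenated) before the downstream task, which is the spirit of the combination step the abstract describes.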