Deep Learning on Graphs: Theory, Models, Algorithms and Applications

Published: 01 Jan 2021, Last Modified: 15 May 2025, License: CC BY-SA 4.0
Abstract: Structures, or graphs, are pervasive in our lives. Although deep learning has achieved tremendous success in many engineering fields, it remains limited in handling various kinds of structured data. More importantly, humans have the remarkable ability to learn discrete structures from data to facilitate explainability and generalization, an ability that current deep learning systems cannot match. In this thesis, I present our work on improving deep learning for graphs from the perspectives of theory, models, algorithms, and applications. We first provide a theoretical investigation of graph neural networks (GNNs), an increasingly popular class of deep neural networks that are promising for learning with graphs. We establish PAC-Bayes generalization bounds for common GNNs and show their connection to existing results for regular neural networks. In the second part, we examine the limitations of current GNNs and introduce novel models that effectively capture multi-scale dependencies in graph convolutions and efficiently propagate messages using asynchronous schedules. Moreover, we study the graph generation problem and introduce a new deep auto-regressive GNN that can generate high-quality graphs in a scalable and fast manner. In the third part, we revisit an implicit-gradient-based learning algorithm and propose new variants that are effective for training message-passing GNNs. Finally, we demonstrate our efforts in customizing GNNs to achieve strong performance in many practical application domains, including computer vision, reinforcement learning, and probabilistic inference. I summarize the thesis and discuss promising future directions in the final chapter.