TL;DR: The paper theoretically links the Lottery Ticket Hypothesis to GNN expressivity, showing that sparse, expressive initializations improve convergence and generalization, while poor pruning can cause irrecoverable loss of expressivity.
Abstract: The lottery ticket hypothesis (LTH) is well studied for convolutional neural networks but has been validated only empirically for graph neural networks (GNNs), for which theoretical findings are largely lacking. In this paper, we identify the expressivity of sparse subnetworks, i.e., their ability to distinguish non-isomorphic graphs, as crucial for finding winning tickets that preserve predictive performance.
We establish conditions under which the expressivity of a sparsely initialized GNN matches that of the full network, particularly when compared to the Weisfeiler-Leman test, and in that context put forward and prove a Strong Expressive Lottery Ticket Hypothesis. We subsequently show that an increased expressivity in the initialization potentially accelerates model convergence and improves generalization. Our findings establish novel theoretical foundations for both LTH and GNN research, highlighting the importance of maintaining expressivity in sparsely initialized GNNs. We illustrate our results using examples from drug discovery.
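The Weisfeiler-Leman (WL) test mentioned above is the standard yardstick for GNN expressivity. A minimal sketch of 1-WL color refinement is given below; the function names are illustrative and not part of the paper's code. It iteratively recolors each node by hashing its own color together with the multiset of its neighbors' colors, and two graphs are certified non-isomorphic when their final color histograms differ. Note that 1-WL can fail: two non-isomorphic regular graphs with equal degree may receive identical colorings.

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL color refinement: iteratively hash each node's color
    together with the sorted multiset of its neighbors' colors."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
            for v in adj
        }
    # The multiset of final node colors summarizes the graph.
    return Counter(colors.values())

def wl_distinguishes(adj1, adj2, rounds=3):
    """True iff 1-WL certifies the two graphs as non-isomorphic."""
    return wl_colors(adj1, rounds) != wl_colors(adj2, rounds)

# A 4-cycle vs. a 4-node path: degree sequences differ, so 1-WL succeeds.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

# A 6-cycle vs. two disjoint triangles: both are 2-regular,
# so 1-WL cannot tell them apart (a classic expressivity failure).
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}
```

Since message-passing GNNs are at most as powerful as 1-WL, a pruned subnetwork that collapses these color distinctions loses exactly the kind of expressivity the abstract argues a winning ticket must preserve.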
Lay Summary: Graph Neural Networks (GNNs) analyze complex relational data, powering breakthroughs in areas like drug discovery and social media analysis. A key idea -- called the Lottery Ticket Hypothesis -- claims that large neural networks contain smaller subnetworks that alone can achieve top performance. We extend this idea to GNNs by identifying the role of "expressivity," the network’s ability to distinguish between different graph structures, in finding these subnetworks. Our theoretical analysis proves that maintaining expressivity is crucial when pruning networks. Empirically, we show expressive subnetworks perform significantly better after training, even with fewer parameters. This not only provides novel insights for theoretical research but also practical guidelines for building efficient, powerful GNNs that reliably differentiate critical structures, such as toxic versus non-toxic molecules.
Link To Code: https://github.com/lorenz0890/wl2025lottery
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Lottery Ticket Hypothesis, Graph Neural Networks
Submission Number: 2513