Nonparametric Teaching for Graph Property Learners

Published: 01 May 2025 · Last Modified: 23 Jul 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
Abstract: Inferring properties of graph-structured data, *e.g.*, the solubility of molecules, essentially involves learning the implicit mapping from graphs to their properties. This learning process is often costly for graph property learners like Graph Convolutional Networks (GCNs). To address this, we propose a paradigm called Graph Nonparametric Teaching (GraNT) that reinterprets the learning process through a novel nonparametric teaching perspective. Specifically, nonparametric teaching offers a theoretical framework for teaching implicitly defined (*i.e.*, nonparametric) mappings via example selection. Such an implicit mapping is realized by a dense set of graph-property pairs, with the GraNT teacher selecting a subset of them to promote faster convergence in GCN training. By analytically examining the impact of graph structure on parameter-based gradient descent during training, and recasting the evolution of GCNs—shaped by parameter updates—through functional gradient descent in nonparametric teaching, we show *for the first time* that teaching graph property learners (*i.e.*, GCNs) is consistent with teaching structure-aware nonparametric learners. These findings enable GraNT to enhance the learning efficiency of graph property learners, yielding significant reductions in training time for graph-level regression (-36.62%), graph-level classification (-38.19%), node-level regression (-30.97%) and node-level classification (-47.30%), all while maintaining generalization performance.
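To make the teaching-by-example-selection idea concrete, below is a minimal, self-contained PyTorch sketch. The `TinyGCN` learner, the `teach` loop, and the greedy "score the pool by per-example loss, train on the top-budget graphs" rule are all illustrative assumptions: the loss magnitude is used here only as a simple proxy for the steepness of the functional gradient, and this is not the authors' exact GraNT selection criterion (see the linked repository for the real implementation).

```python
# Minimal sketch of teaching a GCN via example selection (hypothetical,
# loss-magnitude proxy; NOT the exact GraNT criterion from the paper).
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    """Two-layer GCN for graph-level regression on dense adjacency matrices."""
    def __init__(self, in_dim: int, hid_dim: int = 32):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, hid_dim)
        self.readout = nn.Linear(hid_dim, 1)

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}
        a = adj + torch.eye(adj.size(-1))
        d = a.sum(-1).clamp(min=1e-6).rsqrt()
        a_hat = d.unsqueeze(-1) * a * d.unsqueeze(-2)
        h = torch.relu(self.w1(a_hat @ x))
        h = torch.relu(self.w2(a_hat @ h))
        # Mean-pool node embeddings into a single graph-level prediction.
        return self.readout(h.mean(dim=-2)).squeeze(-1)

def teach(model, graphs, targets, budget=8, steps=200, lr=1e-2):
    """Each round, the 'teacher' scores the candidate pool and the learner
    trains only on the `budget` graphs with the largest squared error."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        with torch.no_grad():  # teacher pass: score every candidate graph
            losses = torch.stack([(model(a, x) - y) ** 2
                                  for (a, x), y in zip(graphs, targets)])
        picked = torch.topk(losses, k=budget).indices  # most informative subset
        opt.zero_grad()
        batch_loss = torch.stack([(model(*graphs[i]) - targets[i]) ** 2
                                  for i in picked]).mean()
        batch_loss.backward()
        opt.step()
    return model

# Toy usage: random graphs whose target is their normalized edge count.
torch.manual_seed(0)
graphs, targets = [], []
for _ in range(64):
    a = (torch.rand(10, 10) < 0.3).float()
    a = ((a + a.T) > 0).float().fill_diagonal_(0)  # symmetric, no self-loops
    graphs.append((a, torch.randn(10, 4)))
    targets.append(a.sum() / 100.0)
teach(TinyGCN(in_dim=4), graphs, torch.stack(targets))
```

The design choice worth noting is the two-pass structure: a cheap, gradient-free teacher pass over the whole pool, followed by a gradient step on the selected subset only, which is where the training-time savings reported in the abstract would come from.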
Lay Summary: Graphs, like molecular structures or social networks, hold valuable information, but extracting useful insights from these complex structures can be time-consuming and computationally expensive. Current methods, such as Graph Convolutional Networks (GCNs), require extensive training, which slows down progress in fields like drug discovery and network analysis. We developed a novel approach called Graph Nonparametric Teaching (GraNT), which reimagines the learning process. Instead of training GCNs on all available data, GraNT strategically selects the most informative examples to teach the model faster and more effectively. By analyzing how graph structures influence training and leveraging a theoretical framework called nonparametric teaching, GraNT optimizes the learning process without sacrificing accuracy. GraNT significantly reduces training time—up to 47% in some tasks—while maintaining high performance. This means faster and more efficient predictions for real-world problems, from identifying drug properties to detecting patterns in social networks. By making graph-based learning more efficient, GraNT opens the door to broader applications and quicker advancements in science and technology.
Link To Code: https://github.com/chen2hang/GraNT_NonparametricTeaching
Primary Area: General Machine Learning->Everything Else
Keywords: Nonparametric Teaching, Graph Property Learning, Functional Gradient Descent
Submission Number: 4554