Keywords: Physics guided machine learning; Graph neural network; Boundary conditions
Abstract: Topological flux prediction (TFP), which aims to model spatiotemporal fluid transport over networked systems, has inspired a variety of predictive methods. Whereas Graph Neural Networks (GNNs) have demonstrated success in related prediction tasks, recent studies suggest that they can underperform even simple baselines in TFP, concluding that GNNs may be ill-suited for such problems. In this paper, we re-examine this claim by dissecting the learning behavior of GNNs on fluid networks, decoupling the roles of boundary nodes, which regulate total influx, from those of interior nodes. We find that the dominant prediction errors arise at boundary nodes; this does not necessarily imply a fundamental limitation in the expressive power of GNNs. We interpret this phenomenon from a dynamical-systems perspective, arguing that GNNs incur substantial boundary losses mainly because they lack explicit modeling of boundary conditions. To compensate for this information deficit, we propose a novel ghost-TFP framework, which learns ghost-node proxies with an implicit solver to capture boundary-aware representations. Experimental results on two real datasets show that ghost-TFP improves standard GNNs, reducing the average MSE by 8.35\% and 5.0\%, and the boundary-node MSE by 11.2\% and 7.1\%, respectively. For efficiency, we further devise an explicit solver that learns inverse operators and, depending on the underlying GNN backbone, can accelerate inference by $2\times$ on both datasets.
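The abstract's central idea of attaching ghost-node proxies at boundary nodes can be illustrated with a minimal topology-augmentation sketch. This is a hypothetical construction, not the paper's actual ghost-TFP implementation: the function name `add_ghost_nodes` and the dense-adjacency representation are assumptions for illustration, and the learned implicit solver is omitted entirely.

```python
import numpy as np

def add_ghost_nodes(adj, boundary_idx):
    """Augment a graph with one ghost node per boundary node.

    Hypothetical sketch of the ghost-node idea: `adj` is an (n, n)
    adjacency matrix and `boundary_idx` lists the nodes that regulate
    total influx. Each ghost node attaches to exactly one boundary
    node, giving a GNN an explicit slot for boundary-condition state.
    """
    n = adj.shape[0]
    g = len(boundary_idx)
    aug = np.zeros((n + g, n + g), dtype=adj.dtype)
    aug[:n, :n] = adj  # keep the original topology intact
    for k, b in enumerate(boundary_idx):
        ghost = n + k
        # undirected ghost-boundary edge carrying the boundary condition
        aug[ghost, b] = 1
        aug[b, ghost] = 1
    return aug

# toy 4-node path graph with boundary nodes at both ends
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
aug = add_ghost_nodes(adj, [0, 3])
print(aug.shape)  # (6, 6)
```

In an actual ghost-TFP pipeline, the features of the appended ghost nodes would be learned (implicitly or via the explicit inverse-operator solver) rather than fixed, so that message passing at boundary nodes receives boundary-aware representations.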
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 21682