Abstract: Graph Neural Networks (GNNs) have achieved notable success in spatiotemporal modeling across diverse application domains. However, their efficacy in flux prediction (FP), where the goal is to model spatiotemporal fluid transport over networked physical systems, remains contentious. Recent studies report that GNNs can underperform even simple baselines in FP settings, leading to the claim that GNNs may be intrinsically ill-suited for such tasks.
In this paper, we revisit this claim by dissecting GNN learning dynamics on fluid transport networks, with an emphasis on their boundary regions. Specifically, we decompose the graph into boundary and interior nodes, where boundary nodes regulate the total influx and serve as the primary interface with external forcing. Our empirical and theoretical analyses reveal that the dominant prediction errors concentrate at boundary nodes. From a dynamical-systems perspective, we interpret these boundary errors as the consequence of unmodeled external forcing. We therefore hypothesize that the observed performance degradation of GNNs stems not from limited expressivity, but from the absence of explicit external-forcing modeling during training.
To validate this hypothesis, we propose \myalg, which learns ghost-node proxies to approximate the unmodeled external forcing. Each boundary node is augmented with an associated ghost node that represents the latent forcing. This yields a ghost--boundary--interior coupled system, which we solve via an implicit fixed-point formulation; the resulting equilibrium \emph{jointly} infers the external forcing and propagates it into the interior. This enriches standard GNN backbones with boundary-consistent representations while preserving interior message passing. Extensive experiments on two real-world fluid network datasets demonstrate that \myalg\ improves standard GNNs, reducing average MSE by 8.4\% and 5.0\% and boundary-node MSE by 11.2\% and 7.1\%, respectively. For computational efficiency, we further introduce an explicit inverse-operator solver that amortizes the fixed-point computation, accelerating inference by up to $2\times$ depending on the backbone architecture.
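The ghost--boundary--interior equilibrium described above can be illustrated as a damped fixed-point iteration: ghost variables are active only at boundary nodes and track a residual interpreted as external forcing, while node states are propagated by standard message passing plus the ghost injection. The sketch below is a minimal assumption-laden illustration, not the paper's \myalg\ formulation; the function name `fixed_point_ghost`, the `tanh` updates, the damping factor, and the coupling weights `W_g` are all hypothetical choices made for the example.

```python
import numpy as np

def fixed_point_ghost(A, x, boundary_mask, W_g, max_iter=100, tol=1e-6):
    """Jointly iterate ghost forcing g and node states h to an equilibrium.

    A             : (n, n) row-normalized adjacency (interior message passing)
    x             : (n, d) input node features
    boundary_mask : (n,) boolean, True at boundary nodes
    W_g           : (d, d) ghost coupling weights (learned in a real model)
    """
    n, d = x.shape
    h = x.copy()
    g = np.zeros((n, d))  # ghost proxies, nonzero only at boundary nodes
    for _ in range(max_iter):
        # Ghost update: boundary ghosts absorb the residual the graph
        # propagation cannot explain (a stand-in for external forcing).
        g_new = np.where(boundary_mask[:, None],
                         np.tanh((x - A @ h) @ W_g), 0.0)
        # Node update: damped propagation plus ghost injection at boundaries.
        h_new = np.tanh(0.5 * (A @ h) + x + g_new)
        if max(np.abs(h_new - h).max(), np.abs(g_new - g).max()) < tol:
            h, g = h_new, g_new
            break
        h, g = h_new, g_new
    return h, g
```

With small enough coupling weights the update is contractive, so the iteration converges to a pair `(h, g)` where the inferred boundary forcing and the interior representations are mutually consistent, which is the qualitative behavior the implicit formulation targets.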
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Xiao_Luo3
Submission Number: 7656