Fixed-Point Probing for GNN Depth Diagnostics: A Geometry-Consistent Protocol with a Patent-Citation Case Study

31 Mar 2026 (modified: 27 Apr 2026) · Under review for TMLR · CC BY 4.0
Abstract: Deep graph neural networks (GNNs) often degrade with depth, but endpoint metrics and any single probe do not reveal whether late-depth behavior reflects benign stabilization, classical oversmoothing, or a geometry-specific failure mode. Here, we read depth as a sequence of learned representations, not just as a model-size hyperparameter. We introduce fixed-point probing, a post-training protocol that keeps the probe subset fixed and the measurements geometry-consistent, so familiar signals can be read together across depths and embedding geometries. Applied to depth sweeps up to 32 layers on a patent-citation stress test, the protocol reveals geometry-dependent late-depth regimes. Euclidean models exhibit gradual class-structure degradation consistent with classical oversmoothing, while hyperbolic models enter a late-depth regime in which representation drift and graph-local roughness increase as embeddings approach the boundary. A tuned hyperbolic control matches Euclidean performance at shallow depth yet exhibits the same qualitative late-depth pattern, indicating that this effect is not explained by a trivially weak baseline. Taken together, the results point to a boundary-coupled late-depth regime in hyperbolic GNNs that is hard to isolate from endpoint metrics or from any single probe alone, but becomes visible when the probes are read jointly under a shared protocol. The protocol is the main contribution; the patent citation graph is used as a stress-test case study, not as evidence for dataset-universal claims.
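The core of the protocol described above is that the probe node subset is fixed once and reused at every depth, so per-layer signals such as representation drift and graph-local roughness stay comparable across the sweep. The following is a minimal illustrative sketch, not the authors' implementation: the function name `probe_metrics`, the specific metrics, and the use of Euclidean norms (a geometry-consistent version would substitute the appropriate hyperbolic distance) are all assumptions for illustration.

```python
import numpy as np

def probe_metrics(layer_embs, probe_idx, edges):
    """Hypothetical fixed-point probing sketch.

    layer_embs: list of (n_nodes, d) arrays, one per GNN layer.
    probe_idx:  fixed indices of probe nodes, chosen once and never resampled.
    edges:      list of (u, v) node pairs for graph-local roughness.
    """
    metrics, prev = [], None
    for H in layer_embs:
        P = H[probe_idx]  # same probe subset at every depth
        # Representation drift: mean displacement of probe nodes vs. previous layer.
        # Euclidean norm is an assumption; swap in a hyperbolic distance as needed.
        drift = 0.0 if prev is None else float(np.linalg.norm(P - prev, axis=1).mean())
        # Graph-local roughness: mean squared embedding difference across edges.
        rough = float(np.mean([np.sum((H[u] - H[v]) ** 2) for u, v in edges]))
        metrics.append({"drift": drift, "roughness": rough})
        prev = P
    return metrics
```

Reading these per-layer dictionaries jointly, rather than any one metric in isolation, is what the abstract means by interpreting the probes "under a shared protocol": for example, rising drift together with rising roughness at late depth is the boundary-coupled hyperbolic signature described above.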
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Guillaume_Rabusseau1
Submission Number: 8183