Convexified Message-Passing Graph Neural Networks
TL;DR: We propose Convexified Graph Neural Networks (CGNNs), combining the power of GNNs with the tractability of convex optimization. CGNNs surpass leading models by 10–40% on most benchmarks and match or modestly exceed them on the rest.
Abstract: Graph Neural Networks (GNNs) are key tools for graph representation learning, demonstrating strong results across diverse prediction tasks. In this paper, we present **Convexified Message-Passing Graph Neural Networks** (CGNNs), a novel and general framework that combines the power of message-passing GNNs with the tractability of *convex* optimization. By mapping their nonlinear filters into a reproducing kernel Hilbert space, CGNNs transform training into a convex optimization problem that can be solved efficiently and to global optimality by projected gradient methods. This convexity also makes the statistical properties of CGNNs amenable to precise analysis. For two-layer CGNNs, we establish rigorous generalization guarantees, showing convergence to the performance of the optimal GNN. To scale to deeper architectures, we adopt a principled layer-wise training strategy. Experiments on benchmark datasets show that CGNNs significantly exceed the performance of leading GNN models, obtaining 10–40\% higher accuracy in most cases, underscoring their promise as a powerful and principled method with strong theoretical foundations. In the rare cases where the improvements are not substantial, the convex models slightly exceed or match the baselines, highlighting their robustness and broad applicability. Although over-parameterization is often used to boost performance in non-convex models, we show that our CGNN framework yields shallow convex models that surpass non-convex ones in both accuracy and model compactness.
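To make the high-level recipe in the abstract concrete, here is a minimal, hypothetical sketch of the general pattern it describes: message passing over a graph, a kernel (RKHS) feature map in place of a nonlinear activation, and convex training by projected gradient descent. The random-Fourier-feature approximation, the nuclear-norm constraint, and all names (`project_nuclear_ball`, dimensions, hyperparameters) are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch only: a "convexified" two-layer message-passing model
# trained with projected gradient descent on a convex objective.
# Assumptions (not from the paper): the RKHS filter map is approximated with
# random Fourier features, and the convex constraint is a nuclear-norm ball.
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: N nodes, d input features, C node classes.
N, d, D, C = 100, 16, 64, 2
X = rng.normal(size=(N, d))
A = (rng.random((N, N)) < 0.05).astype(float)
A = np.maximum(A, A.T)                        # symmetrize
A_hat = A + np.eye(N)                         # add self-loops
deg = A_hat.sum(1)
A_norm = A_hat / np.sqrt(np.outer(deg, deg))  # symmetric normalization
y = rng.integers(0, C, size=N)
Y = np.eye(C)[y]                              # one-hot labels

# 1) Message passing: aggregate neighbor features.
H = A_norm @ X

# 2) Approximate an RKHS (Gaussian-kernel) feature map with random
#    Fourier features, replacing the nonlinear filter.
W_rf = rng.normal(size=(d, D))
b_rf = rng.uniform(0, 2 * np.pi, size=D)
Phi = np.sqrt(2.0 / D) * np.cos(H @ W_rf + b_rf)

# 3) Convex training: linear readout Theta under a nuclear-norm constraint,
#    squared loss, optimized by projected gradient descent.
def project_nuclear_ball(M, radius):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    if s.sum() <= radius:
        return M
    # Project singular values onto the simplex scaled by `radius`.
    mu = np.sort(s)[::-1]
    cssv = np.cumsum(mu) - radius
    k = np.nonzero(mu - cssv / np.arange(1, len(mu) + 1) > 0)[0][-1]
    tau = cssv[k] / (k + 1.0)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

Theta = np.zeros((D, C))
lr, radius = 0.1, 5.0
for _ in range(200):
    grad = Phi.T @ (Phi @ Theta - Y) / N      # gradient of the squared loss
    Theta = project_nuclear_ball(Theta - lr * grad, radius)

acc = (np.argmax(Phi @ Theta, 1) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

Because the feature map is fixed and the constraint set is convex, the training problem above is convex in `Theta`, so projected gradient descent reaches its global optimum; a deeper model would repeat this construction layer by layer, in the spirit of the layer-wise strategy mentioned in the abstract.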
Submission Number: 304