Bridging Input Feature Spaces Towards Graph Foundation Models

ICLR 2026 Conference Submission 11411 Authors

Published: 26 Jan 2026, Last Modified: 26 Jan 2026, ICLR 2026, CC BY 4.0
Keywords: Graph Neural Networks, Graph Foundation Models
TL;DR: We address the lack of a shared input space in graph learning. We propose ALL-IN: map node features into a shared random space and build covariance-based representations that are invariant to feature permutations and orthogonal transforms, enabling zero-shot transfer.
Abstract: Unlike vision and language domains, graph learning lacks a shared input space, as input features differ across graph datasets not only in semantics, but also in value ranges and dimensionality. This misalignment prevents graph models from generalizing across datasets, limiting their use as foundation models. In this work, we propose ALL-IN, a simple and theoretically grounded method that enables transferability across datasets with different input features. Our approach projects node features into a shared random space and constructs representations via covariance-based statistics, thus eliminating dependence on the original feature space. We show that the computed node-covariance operators and the resulting node representations are invariant in distribution to permutations of the input features. We further demonstrate that the expected operator exhibits invariance to general orthogonal transformations of the input features. Empirically, ALL-IN achieves strong performance across diverse node- and graph-level tasks on unseen datasets with new input features, without requiring architecture changes or retraining. These results point to a promising direction for input-agnostic, transferable graph models.
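To make the abstract's pipeline concrete, below is a minimal NumPy sketch of the two ingredients it describes: a random projection into a shared space followed by covariance-based node statistics. All names (`random_projection`, `covariance_representations`, `d_shared`) and implementation details are illustrative assumptions, not the authors' code.

```python
import numpy as np

def random_projection(X, d_shared=8, rng=None):
    """Map features X of shape (n_nodes, d_in) into a shared d_shared-dim random space."""
    rng = np.random.default_rng(rng)
    d_in = X.shape[1]
    # Gaussian random matrix: inputs of any dimensionality land in the same space.
    W = rng.standard_normal((d_in, d_shared)) / np.sqrt(d_in)
    return X @ W

def covariance_representations(Z, adj):
    """Per-node covariance statistics over neighborhood features (one plausible choice).

    Z:   (n, d) projected features
    adj: (n, n) binary adjacency matrix
    Returns the flattened upper-triangular covariance entries for each node.
    """
    n, d = Z.shape
    A = adj + np.eye(n)  # include each node in its own neighborhood
    iu = np.triu_indices(d)
    reps = np.empty((n, d * (d + 1) // 2))
    for v in range(n):
        nbr = Z[A[v] > 0]                            # features of v's closed neighborhood
        C = np.cov(nbr, rowvar=False, bias=True)     # (d, d) neighborhood covariance
        reps[v] = C[iu]                              # symmetric: keep upper triangle only
    return reps

# Toy usage: two graphs with different input dimensionalities end up in one space.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((5, 10))    # graph 1: 10-dim features
X2 = rng.standard_normal((6, 300))   # graph 2: 300-dim features
A1 = (rng.random((5, 5)) < 0.5).astype(float); A1 = np.triu(A1, 1); A1 += A1.T
A2 = (rng.random((6, 6)) < 0.5).astype(float); A2 = np.triu(A2, 1); A2 += A2.T
R1 = covariance_representations(random_projection(X1, d_shared=8, rng=0), A1)
R2 = covariance_representations(random_projection(X2, d_shared=8, rng=0), A2)
print(R1.shape, R2.shape)  # both (n, 36): comparable representations
```

The point of the toy usage is the one the abstract makes: because representations are built from covariance statistics in the shared random space rather than from the raw features, graphs with 10-dim and 300-dim inputs yield representations of identical dimensionality, independent of the original feature space.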
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 11411