All In: Bridging Input Feature Spaces Towards Graph Foundation Models

Published: 23 Sept 2025, Last Modified: 17 Nov 2025, UniReps 2025, CC BY 4.0
Track: Extended Abstract Track
Keywords: Graph Neural Networks, Graph Foundation Models, Bridging Input Spaces
TL;DR: We develop a model that enables transfer across diverse datasets with different input features, using random projections and theoretically characterized node-covariance operators.
Abstract: Graph learning is hindered by the lack of a shared input space, as features vary in semantics and dimensionality across datasets, preventing models from generalizing. We propose ALL-IN, a method that enables transferability across these diverse input feature spaces. Our approach projects node features into a shared random space and builds representations from covariance-based statistics, removing dependence on the original feature space. Theoretically, we show that the resulting node representations are invariant in distribution to permutations of the input features, and that the expected operator is invariant to orthogonal transformations of the input features. Empirically, ALL-IN achieves strong performance on unseen datasets with new features across various tasks without requiring retraining, pointing to a promising direction for truly input-agnostic and transferable graph models.
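The core mechanism described in the abstract can be illustrated with a short sketch. The following is a minimal illustration of the stated idea, not the authors' implementation: node features of arbitrary, dataset-specific dimensionality are mapped through a random projection into a shared space, and each node is summarized by a covariance-style statistic there. The function name, the target dimension k = 64, and the Gaussian projection are illustrative assumptions.

```python
# Minimal sketch of the idea in the abstract (assumed details, not the
# paper's code): project node features into a shared random space, then
# build per-node covariance-based statistics that no longer depend on
# the original feature dimensionality.
import numpy as np

def random_projection_covariance(X: np.ndarray, k: int = 64, seed: int = 0) -> np.ndarray:
    """X: (n_nodes, d) node features with dataset-specific d.
    Returns an (n_nodes, k, k) stack of per-node outer products
    in a shared k-dimensional space."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Gaussian random projection into a shared k-dimensional space;
    # scaling by 1/sqrt(d) keeps second moments comparable across datasets.
    W = rng.standard_normal((d, k)) / np.sqrt(d)
    Z = X @ W  # (n_nodes, k) projected features
    # Covariance-style statistic per node: the outer product z z^T, whose
    # expectation over W is unchanged by orthogonal transforms of X.
    return np.einsum("ni,nj->nij", Z, Z)

# Usage: two datasets with different feature dimensionalities land in
# the same (k, k) representation space.
X_a = np.random.rand(100, 10)            # dataset A: 10-dim features
X_b = np.random.rand(50, 300)            # dataset B: 300-dim features
C_a = random_projection_covariance(X_a)  # (100, 64, 64)
C_b = random_projection_covariance(X_b)  # (50, 64, 64)
```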
Submission Number: 77