Submission Type: Short paper (4 pages)
Keywords: relational deep learning, representation learning, foundation models
TL;DR: RELATE is a schema-agnostic encoder that plugs into any relational deep learning model
Abstract: Relational multi-table data is common in domains such as e-commerce, healthcare, and scientific research, and can be naturally represented as heterogeneous temporal graphs with multi-modal node attributes. Existing graph neural networks (GNNs) rely on schema-specific feature encoders, requiring separate modules for each node type and feature column, which hinders scalability and parameter sharing. We introduce $\textbf{RELATE}$ (Relational Encoder for Latent Aggregation of Typed Entities), a schema-agnostic, plug-and-play feature encoder that can be used with any general-purpose GNN. RELATE employs shared modality-specific encoders for categorical, numerical, textual, and temporal attributes, followed by a Perceiver-style cross-attention module that aggregates features into a fixed-size, permutation-invariant node representation. We evaluate RELATE with ReLGNN and HGT on the RelBench benchmark, where it achieves performance within 3\% of schema-specific encoders while reducing parameter counts by up to 5x. This design supports varying schemas and enables multi-dataset pretraining for general-purpose GNNs, paving the way toward foundation models for relational graph data.
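For illustration only, here is a minimal sketch (not the paper's implementation) of the aggregation step described above: assuming shared modality-specific encoders have already mapped each attribute to a common dimension, a small set of learned latent queries cross-attends over the variable-length set of encoded columns to produce a fixed-size, permutation-invariant node embedding. All module names and hyperparameters below are assumptions.

```python
# Minimal sketch of a Perceiver-style cross-attention aggregator (illustrative,
# not RELATE's actual code). Encoded feature columns of any number and order
# are pooled into a fixed-size node representation via learned latent queries.
import torch
import torch.nn as nn


class CrossAttentionAggregator(nn.Module):
    def __init__(self, dim: int = 64, num_latents: int = 4, num_heads: int = 4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim))  # learned queries
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(num_latents * dim, dim)

    def forward(self, cells: torch.Tensor) -> torch.Tensor:
        # cells: [batch, num_cells, dim] -- encoded attribute columns; num_cells
        # varies per node type, and no positional encoding is used, so the
        # output is invariant to column order.
        batch = cells.size(0)
        queries = self.latents.unsqueeze(0).expand(batch, -1, -1)
        pooled, _ = self.attn(queries, cells, cells)   # cross-attention bottleneck
        return self.proj(pooled.flatten(1))            # fixed-size node embedding


# Hypothetical usage: node types with 3 and 7 encoded columns both map to
# embeddings of the same dimension, so one encoder serves every schema.
agg = CrossAttentionAggregator()
print(agg(torch.randn(2, 3, 64)).shape)  # torch.Size([2, 64])
print(agg(torch.randn(2, 7, 64)).shape)  # torch.Size([2, 64])
```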
Published Paper Link: https://openreview.net/pdf?id=RdHEwMuzba
Relevance Comments: RELATE introduces a schema-agnostic encoder for multi-modal heterogeneous temporal graphs built from relational databases. RELATE replaces per-column encoder stacks with shared modality encoders and a Perceiver-style cross-attention bottleneck that produces a fixed-dimension representation regardless of schema. This aligns strongly with the workshop's focus on scalable, multimodal representation learning over relational databases. On the RelBench benchmark, RELATE matches schema-specific baselines within ~3% while cutting parameters by up to 5x, paving a practical path to multi-dataset pretraining and graph foundation models; this makes it directly relevant to sessions on transfer across schemas, efficiency, and general-purpose GNNs.
Published Venue And Year: NeurIPS 2025, New Perspectives on Graph Machine Learning Workshop
Submission Number: 46