Bridging the Divide: End-to-End Sequence–Graph Learning

Published: 23 Sept 2025 · Last Modified: 21 Oct 2025 · NPGML Poster · CC BY 4.0
Keywords: sequence-graph models, user event modeling
TL;DR: We propose BRIDGE, an end-to-end model that jointly learns from event sequences and graph structure. It outperforms static GNNs and temporal graph models on friendship prediction and fraud detection.
Abstract: Many real-world datasets are both sequential and relational: each node carries an event sequence while edges encode interactions. Existing sequence models and graph models typically neglect one modality or the other. We argue that sequences and graphs are not separate problems but complementary facets of the same dataset, and should be learned jointly. We introduce BRIDGE, a unified end-to-end architecture that couples a sequence encoder with a GNN under a single objective, allowing gradients to flow across both modules and learning task-aligned representations. To enable fine-grained, token-level message passing among neighbors, we add TokenXAttn, a cross-attention layer that exchanges messages between events in neighboring sequences. Across two settings, friendship prediction (Brightkite) and fraud detection (Amazon), BRIDGE consistently outperforms static GNNs, temporal graph methods, and strong sequence-only baselines on ranking and classification metrics.
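The abstract does not include an implementation, so the following is a minimal PyTorch sketch of the architecture it describes, under stated assumptions: `TokenXAttn` is rendered here as plain multi-head cross-attention from a node's event tokens to its neighbors' tokens, the GNN step is a single mean-aggregation layer, and all names, dimensions, and the fixed-size neighbor sampling (`neighbor_idx`) are illustrative choices, not details from the paper.

```python
import torch
import torch.nn as nn

class TokenXAttn(nn.Module):
    """Token-level cross-attention: each event token in a node's sequence
    attends over event tokens drawn from its neighbors' sequences
    (one possible reading of the paper's TokenXAttn layer)."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor, neighbor_tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (N, L, D); neighbor_tokens: (N, K*L, D)
        out, _ = self.attn(query=tokens, key=neighbor_tokens, value=neighbor_tokens)
        return self.norm(tokens + out)  # residual + layer norm

class Bridge(nn.Module):
    """Toy BRIDGE-style model: sequence encoder -> token-level cross-attention
    with neighbors -> pooled node embeddings -> one GNN-style aggregation
    step -> task head, all trained under a single objective."""
    def __init__(self, vocab_size: int, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
        self.seq_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.token_xattn = TokenXAttn(dim, num_heads)
        self.gnn = nn.Linear(2 * dim, dim)   # combine self + mean-aggregated neighbors
        self.head = nn.Linear(dim, 1)        # e.g. one fraud logit per node

    def forward(self, event_ids: torch.Tensor, neighbor_idx: torch.Tensor) -> torch.Tensor:
        # event_ids: (N, L) event codes per node; neighbor_idx: (N, K) sampled neighbors
        N, _ = event_ids.shape
        h = self.seq_encoder(self.embed(event_ids))               # (N, L, D)
        nbr_tokens = h[neighbor_idx].reshape(N, -1, h.size(-1))   # (N, K*L, D)
        h = self.token_xattn(h, nbr_tokens)                       # token-level message passing
        node = h.mean(dim=1)                                      # pool tokens -> node embedding
        agg = node[neighbor_idx].mean(dim=1)                      # mean over neighbor embeddings
        node = torch.relu(self.gnn(torch.cat([node, agg], dim=-1)))
        return self.head(node).squeeze(-1)                        # (N,) logits

# Smoke test on random data: one BCE loss backpropagates through both the
# sequence encoder and the graph aggregation, i.e. gradients cross modules.
N, L, K, V = 32, 16, 4, 100
model = Bridge(vocab_size=V)
events = torch.randint(0, V, (N, L))
neighbors = torch.randint(0, N, (N, K))
labels = torch.randint(0, 2, (N,)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(events, neighbors), labels)
loss.backward()
print(f"loss = {loss.item():.4f}")
```

The key property the sketch tries to capture is the end-to-end coupling: because the sequence encoder, the cross-attention layer, and the graph aggregation sit under one loss, the sequence representations are shaped by the graph task rather than pretrained and frozen.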
Submission Number: 34