TopoFormer: Topology Meets Attention for Graph Learning

ICLR 2026 Conference Submission 6415 Authors

16 Sept 2025 (modified: 21 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Topological Data Analysis, Transformers, Graph Representation Learning, Graph Classification, Molecular Property Prediction
TL;DR: TopoFormer turns graphs into short sequences of topological tokens, enabling transformers to capture multi-scale structure efficiently. It outperforms strong GNN and TDA baselines on graph learning tasks with theoretical stability guarantees.
Abstract: We introduce *TopoFormer*, a lightweight and scalable framework for graph representation learning that encodes topological structure into attention-friendly sequences. At the core of our method is *Topo-Scan*, a novel module that decomposes a graph into a short, ordered sequence of topological tokens by slicing over node or edge filtrations. These sequences capture multi-scale structural patterns, from local motifs to global organization, and are processed by a Transformer to produce expressive graph-level embeddings. Unlike traditional persistent homology pipelines, *Topo-Scan* is parallelizable, avoids costly diagram computations, and integrates seamlessly with standard deep learning architectures. We provide theoretical guarantees on the stability of our topological encodings and demonstrate state-of-the-art performance across graph classification and molecular property prediction benchmarks. Our results show that *TopoFormer* matches or exceeds strong GNN and topology-based baselines while offering predictable and efficient compute. This work opens a new path toward parallelizable, unified approaches to graph representation learning that integrate topological inductive biases into attention architectures.
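
The abstract describes slicing a graph over a filtration to obtain a short, ordered sequence of tokens for a Transformer. Below is a minimal sketch of that general idea, assuming a degree-based node filtration and hand-picked per-slice statistics; the actual Topo-Scan tokenization is not specified here, so the function name `topo_tokens`, the choice of filtration, and the summary features are illustrative assumptions only.

```python
# Minimal sketch: slice a graph over a node filtration into a short sequence of
# "topological tokens". The filtration (node degree) and the per-slice statistics
# chosen here are illustrative assumptions, not the paper's Topo-Scan module.
import networkx as nx
import numpy as np


def topo_tokens(G: nx.Graph, num_slices: int = 8) -> np.ndarray:
    """Return a (num_slices, 3) array of tokens for graph G."""
    # Node filtration: here, degree; any scalar node function works in principle.
    f = dict(G.degree())
    values = sorted(set(f.values()))
    # Evenly spaced thresholds over the observed filtration values.
    thresholds = np.linspace(min(values), max(values), num_slices)

    tokens = []
    for t in thresholds:
        # Sublevel-set subgraph: nodes whose filtration value is <= threshold.
        nodes = [v for v, fv in f.items() if fv <= t]
        H = G.subgraph(nodes)
        # Cheap structural summaries of the slice (stand-ins for richer
        # topological descriptors such as Betti numbers).
        tokens.append([
            H.number_of_nodes(),
            H.number_of_edges(),
            nx.number_connected_components(H) if H.number_of_nodes() > 0 else 0,
        ])
    return np.asarray(tokens, dtype=np.float32)


# Example: the resulting short sequence could be fed to a standard Transformer
# encoder to produce a graph-level embedding.
if __name__ == "__main__":
    G = nx.barbell_graph(5, 2)
    print(topo_tokens(G))
```

Because each slice is computed independently, the token sequence can be built in parallel across thresholds, which is consistent with the parallelizability claim in the abstract.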
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 6415