CAT: Curvature-Adaptive Transformers for Geometry-Aware Learning

Published: 24 Sept 2025 · Last Modified: 25 Nov 2025 · NEGEL 2025 Poster · CC BY 4.0
Keywords: Transformers, Non-Euclidean Geometry, Geometric Deep Learning
TL;DR: Curvature-Adaptive Transformers (CAT) dynamically route tokens across Euclidean, hyperbolic, and spherical attention branches, boosting relational reasoning performance with minimal overhead and interpretable geometry selection.
Abstract: Transformers achieve strong performance across diverse domains but implicitly assume Euclidean geometry in their attention mechanisms, limiting their effectiveness on data with non-Euclidean structure. While recent extensions to hyperbolic and spherical spaces show promise for hierarchical and cyclical patterns, respectively, they require committing to a single geometry a priori, reducing flexibility when data exhibits mixed geometric properties. We introduce the Curvature-Adaptive Transformer (CAT), a novel architecture that dynamically learns per-token routing across three geometric attention branches through a lightweight, differentiable gating mechanism. Unlike fixed-geometry approaches, CAT enables adaptive geometric specialization, routing tokens to the appropriate curvature based on their local relational structure. The routing network provides interpretable curvature preferences while each branch employs geometry-specific operations optimized for its respective manifold. On knowledge graph completion benchmarks (FB15k-237, WN18RR), CAT achieves approximately 10% improvements in MRR and Hits@10 over fixed-geometry baselines with minimal overhead (5% parameter increase, comparable inference time). These results demonstrate that learned geometric adaptation outperforms any single fixed geometry for complex relational reasoning, establishing CAT as a scalable and interpretable foundation for mixture-of-geometry architectures across language, vision, and multimodal domains.
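The gating mechanism described in the abstract can be sketched as follows. This is a minimal illustration of per-token soft routing over three attention branches, not the paper's implementation: the function and variable names (`curvature_adaptive_mix`, `gate_w`) are hypothetical, plain NumPy stands in for the geometry-specific attention operations on each manifold, and the router is reduced to a single linear layer followed by a softmax.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def curvature_adaptive_mix(tokens, branch_outputs, gate_w):
    """Per-token soft routing across geometric attention branches.

    tokens:         (n, d) token representations fed to the gate
    branch_outputs: (3, n, d) outputs of the Euclidean, hyperbolic,
                    and spherical attention branches (placeholders here)
    gate_w:         (d, 3) weights of the lightweight gating network
    Returns the mixed output (n, d) and the per-token routing
    weights (n, 3), which are the interpretable curvature preferences.
    """
    weights = softmax(tokens @ gate_w, axis=-1)        # (n, 3)
    # Convex combination of the three branch outputs per token
    mixed = np.einsum('nk,knd->nd', weights, branch_outputs)
    return mixed, weights

# Toy usage with random data standing in for real branch outputs
rng = np.random.default_rng(0)
n, d = 4, 8
tokens = rng.normal(size=(n, d))
branches = rng.normal(size=(3, n, d))
gate_w = rng.normal(size=(d, 3))
out, w = curvature_adaptive_mix(tokens, branches, gate_w)
```

Because the routing weights sum to one per token, inspecting `w` directly shows which curvature each token prefers, matching the interpretability claim in the abstract.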
Submission Number: 37