Hgformer: Hyperbolic Graph Transformer for Collaborative Filtering

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Recommender systems are increasingly spreading across domains such as e-commerce and video streaming to alleviate information overload. One of the most fundamental methods for recommendation is Collaborative Filtering (CF), which leverages historical user-item interactions to infer user preferences. In recent years, Graph Neural Networks (GNNs) have been extensively studied for capturing graph structures in CF tasks. Despite this remarkable progress, local structure modeling and embedding distortion remain two notable limitations of most GNN-based CF methods. Therefore, in this paper, we propose a novel Hyperbolic Graph Transformer architecture to tackle the long-tail problem in CF tasks. Specifically, the proposed framework comprises two essential modules: 1) a Local Hyperbolic Graph Convolutional Network (LHGCN), which performs graph convolution entirely on the hyperbolic manifold and captures the local structure of each node; and 2) a Hyperbolic Transformer, which uses hyperbolic cross-attention mechanisms to capture global information. Furthermore, to make the model feasible on large-scale data, we introduce an unbiased approximation of the cross-attention with linear computational complexity and a theoretical guarantee on the approximation error. Empirical experiments demonstrate that our proposed model outperforms leading collaborative filtering methods and significantly mitigates the long-tail issue in CF tasks. Our implementation is available at https://github.com/EnkiXin/Hgformer.
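To give a rough intuition for graph convolution in hyperbolic space, the sketch below shows a common tangent-space variant on the Poincaré ball (curvature -1): embeddings are lifted to the tangent space at the origin, mean-aggregated over neighbors, and projected back. This is a simplified illustration only, not the paper's LHGCN, which the abstract states operates entirely on the hyperbolic manifold; all function names here are hypothetical.

```python
import numpy as np

def expmap0(v, eps=1e-9):
    # Exponential map at the origin of the Poincare ball (curvature -1):
    # maps a tangent vector v into the ball, so the output norm is tanh(|v|) < 1.
    n = np.linalg.norm(v, axis=-1, keepdims=True)
    return np.tanh(n) * v / np.maximum(n, eps)

def logmap0(x, eps=1e-9):
    # Logarithmic map at the origin: inverse of expmap0, lifts a ball
    # point back to the tangent space at the origin.
    n = np.linalg.norm(x, axis=-1, keepdims=True)
    return np.arctanh(np.clip(n, 0.0, 1.0 - eps)) * x / np.maximum(n, eps)

def tangent_space_gcn_layer(X, A):
    # One aggregation step: lift node embeddings X (in the ball) to the
    # tangent space, average over neighbors given by adjacency A, map back.
    V = logmap0(X)
    deg = A.sum(axis=1, keepdims=True)
    V_agg = (A @ V) / np.maximum(deg, 1.0)  # row-normalized neighbor mean
    return expmap0(V_agg)
```

Because the output passes through `expmap0`, aggregated embeddings always stay inside the unit ball, which is what makes hyperbolic layers attractive for tree-like (long-tail) interaction graphs.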
Lay Summary: In recent years, recommender systems have been adopted in many areas of daily life, aiming to filter the most useful information for users. Yet the field faces two key challenges: (1) while recommender systems can easily capture the features of popular items, they struggle to capture those of less popular ones; and (2) they have difficulty modeling global information. Until now, few studies have tackled both issues at once. In this work, we propose a unified solution that does: we exploit the spatial characteristics of recommendation data by combining hyperbolic geometry with Graph Transformer techniques. Experiments across multiple datasets show that our method surpasses current baselines, and detailed analyses explain why it works.
Link To Code: https://github.com/EnkiXin/Hgformer
Primary Area: Deep Learning->Attention Mechanisms
Keywords: Collaborative Filtering, Hyperbolic Representation Learning, Graph Transformer
Submission Number: 1559