Abstract: Graph Neural Networks (GNNs) refine embeddings by recursively aggregating information along user-item interaction edges, effectively revealing collaborative effects. Despite their empirical efficacy, GNNs struggle with the data sparsity and noisy connections inherent in collaborative filtering scenarios due to their message-passing mechanism. The Transformer has emerged as a promising encoder for graph-structured data, garnering significant attention primarily due to its global attention mechanism, which captures all-pair influences beyond neighboring nodes and thus alleviates the deficiencies of GNNs. However, directly adopting vanilla self-attention in recommender systems does not always yield the desired results, possibly because classic ID embeddings fail to provide sufficient supervisory signals. To this end, we propose a novel topology-guided graph transformer, named ToBE, which measures attention scores solely from position encodings (PEs) that reflect the topological characteristics of nodes. Furthermore, we derive two types of PEs specifically for bipartite graphs, based on random walks and Laplacian eigenmaps. Extensive experiments on six public benchmarks demonstrate that ToBE outperforms several state-of-the-art models. Further ablation studies validate the rationale and effectiveness of the designed self-attention mechanism driven purely by topological PEs. The source code is available at https://github.com/liulizhi1996/ToBE.
DOI: 10.1007/978-981-95-4091-4_39
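To make the core idea concrete, below is a minimal sketch of the two ingredients the abstract names: a random-walk position encoding computed on the bipartite user-item graph, and a self-attention layer whose scores are derived purely from those PEs, with ID embeddings entering only as values. This is an illustrative assumption of how such a mechanism could look, not the authors' implementation; all names (`random_walk_pe`, `PEOnlyAttention`) and shapes are hypothetical.

```python
# Hypothetical sketch of PE-only attention on a bipartite interaction graph.
# Not the ToBE implementation; names and dimensions are illustrative.
import torch

def random_walk_pe(adj: torch.Tensor, k: int) -> torch.Tensor:
    """Return-probability RWPE: PE[i, t-1] = (RW^t)[i, i] for t = 1..k.

    adj: (n, n) symmetric 0/1 adjacency of the bipartite graph, with
         users and items stacked into a single node set.
    Note: bipartite graphs have no odd cycles, so odd-step return
    probabilities are zero; only even steps carry signal.
    """
    deg = adj.sum(dim=1).clamp(min=1.0)
    rw = adj / deg.unsqueeze(1)          # row-normalized transition matrix
    pe, mat = [], rw
    for _ in range(k):
        pe.append(mat.diagonal())        # return probability of each node
        mat = mat @ rw                   # advance the walk by one step
    return torch.stack(pe, dim=1)        # (n, k)

class PEOnlyAttention(torch.nn.Module):
    """Self-attention whose scores depend only on positional encodings;
    ID embeddings enter solely as values."""
    def __init__(self, pe_dim: int, emb_dim: int):
        super().__init__()
        self.q = torch.nn.Linear(pe_dim, emb_dim, bias=False)
        self.k = torch.nn.Linear(pe_dim, emb_dim, bias=False)
        self.scale = emb_dim ** -0.5

    def forward(self, pe: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:
        # Queries and keys come from PEs only, never from ID embeddings.
        scores = (self.q(pe) @ self.k(pe).transpose(0, 1)) * self.scale
        return torch.softmax(scores, dim=-1) @ emb

# Toy usage: 3 users (nodes 0-2) and 2 items (nodes 3-4).
edges = torch.tensor([[0, 3], [1, 3], [1, 4], [2, 4]])
adj = torch.zeros(5, 5)
adj[edges[:, 0], edges[:, 1]] = 1.0
adj = adj + adj.t()
pe = random_walk_pe(adj, k=4)
layer = PEOnlyAttention(pe_dim=4, emb_dim=8)
out = layer(pe, torch.randn(5, 8))       # refined node embeddings, (5, 8)
```

The design choice this sketch mirrors is the one the abstract motivates: by keeping ID embeddings out of the query/key computation, the attention pattern is fixed by graph topology rather than by weakly supervised ID features.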