Abstract: Recommender systems often suffer from data sparsity, particularly when user interactions within a single domain are limited. Cross-domain recommender systems (CDRSs) address this challenge by transferring knowledge across related domains. However, existing approaches face two key limitations: 1) intra-domain noise, where skewed or unreliable interactions degrade representation quality, and 2) negative transfer, where misaligned knowledge from the source domain harms target-domain performance. To tackle these issues, we propose GCLD-CDR, a novel cross-domain recommendation framework that integrates graph-based contrastive learning with diffusion-based knowledge transfer. To enhance intra-domain learning, GCLD-CDR incorporates two complementary augmentation modules: a feature perturbation generator that introduces controlled noise to improve representation diversity, and a denoising generator that prunes unreliable graph edges to refine structural signals. To mitigate negative transfer, we design a diffusion-based transfer mechanism that progressively perturbs source-domain user representations via a Gaussian diffusion process. A neural decoder then reverses this process, selectively recovering task-relevant information while filtering out noise and misaligned signals. Extensive experiments on real-world datasets demonstrate that GCLD-CDR consistently outperforms state-of-the-art baselines, underscoring its potential for advancing practical and trustworthy recommender systems.
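The abstract only sketches the diffusion-based transfer mechanism at a high level, so the following is a minimal illustrative sketch of one way such a component could look, assuming a DDPM-style forward process over source-domain user embeddings and a small step-conditioned MLP decoder. The function and class names (`forward_diffuse`, `TransferDecoder`), the linear noise schedule, and the reconstruction objective are assumptions for illustration, not GCLD-CDR's actual formulation.

```python
# Illustrative sketch only: the abstract does not give GCLD-CDR's exact equations,
# so the noise schedule, decoder architecture, and loss below are assumptions.
import torch
import torch.nn as nn


def forward_diffuse(z_src, t, alpha_bar):
    """Progressively perturb source-domain user embeddings with Gaussian noise.

    z_src:     (batch, dim) source-domain user representations
    t:         (batch,) diffusion step indices
    alpha_bar: (T,) cumulative noise-schedule products (assumed linear schedule)
    """
    a = alpha_bar[t].unsqueeze(-1)               # (batch, 1)
    eps = torch.randn_like(z_src)                # standard Gaussian noise
    return torch.sqrt(a) * z_src + torch.sqrt(1.0 - a) * eps, eps


class TransferDecoder(nn.Module):
    """Neural decoder that reverses the perturbation, keeping task-relevant
    signal while discarding noisy / misaligned source information (hypothetical MLP)."""

    def __init__(self, dim, hidden=256, T=50):
        super().__init__()
        self.step_emb = nn.Embedding(T, dim)     # condition on the diffusion step
        self.net = nn.Sequential(
            nn.Linear(dim * 2, hidden), nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, z_noisy, t):
        h = torch.cat([z_noisy, self.step_emb(t)], dim=-1)
        return self.net(h)                       # predicted denoised representation


# Minimal usage example with made-up sizes.
T, dim, batch = 50, 64, 8
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

z_src = torch.randn(batch, dim)                  # stand-in for source-domain user embeddings
t = torch.randint(0, T, (batch,))
z_noisy, _ = forward_diffuse(z_src, t, alpha_bar)

decoder = TransferDecoder(dim, T=T)
z_transfer = decoder(z_noisy, t)                 # representation transferred to the target domain
loss = nn.functional.mse_loss(z_transfer, z_src) # simple reconstruction objective (assumed)
```

In this sketch, the decoder is trained to recover the clean source representation from its perturbed version; the intuition from the abstract is that such a reverse process can be made selective, so that only signal useful for the target domain survives while noise and misaligned information is filtered out.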
External IDs: dblp:journals/tsmc/DoZZL26