ABXI: Invariant Interest Adaptation for Task-Guided Cross-Domain Sequential Recommendation

Published: 29 Jan 2025, Last Modified: 29 Jan 2025 · WWW 2025 Poster · CC BY 4.0
Track: User modeling, personalization and recommendation
Keywords: Invariant Learning, Cross-Domain Sequential Recommendation, Sequential Recommendation, Low-Rank Adaptation
TL;DR: We propose ABXI, a CDSR model that addresses the prediction mismatch issue and performs effective, efficient invariant interest adaptation.
Abstract: Cross-Domain Sequential Recommendation (CDSR) has recently gained attention for countering data sparsity by transferring knowledge across domains. A common approach merges domain-specific sequences into cross-domain sequences that serve as bridges, enabling mutual enhancement between domains. One key challenge is to correctly extract the effective shared knowledge among these sequences and transfer it appropriately. Most existing works directly transfer unfiltered cross-domain knowledge rather than extracting domain-invariant components and adaptively integrating them into domain-specific modeling. Another challenge lies in aligning the domain-specific and cross-domain sequences. Existing methods align these sequences by timestamp, but this can cause prediction mismatches when the current tokens and their targets belong to different domains; in such cases, the domain-specific knowledge carried by the current tokens may degrade performance. To address these challenges, we propose the A-B-Cross-to-Invariant Learning Recommender (\textbf{ABXI}). Specifically, leveraging LoRA's effectiveness for efficient adaptation, as supported by numerous studies, our model incorporates two types of LoRAs to facilitate adaptation. First, all sequences are processed by a shared encoder that employs a domain LoRA for each sequence, thereby preserving unique domain characteristics. Next, we introduce an invariant projector that extracts domain-invariant interests from cross-domain representations, using an invariant LoRA to adapt these interests to recommendations in each specific domain. In addition, to avoid prediction mismatches, all domain-specific sequences are re-aligned to match the domains of the cross-domain ground truths. Experimental results on three datasets show that our approach outperforms other CDSR methods, with average improvements of 17.30\% in HR@10 and 18.65\% in NDCG@10.
Submission Number: 227
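
Illustration: the following PyTorch-style sketch shows, under stated assumptions, how a shared encoder layer can be paired with one low-rank (LoRA) adapter per domain, as the abstract describes. It is a minimal sketch, not the authors' implementation; the class names, the rank, and the Transformer layer choice are illustrative assumptions.

# Minimal sketch (assumed names and hyperparameters) of per-domain LoRA
# adaptation on top of a shared sequential encoder.
import torch
import torch.nn as nn


class DomainLoRA(nn.Module):
    """Low-rank adapter: adds scale * (h A^T B^T) to the shared output; one instance per domain."""

    def __init__(self, dim: int, rank: int = 8, scale: float = 1.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, dim) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(dim, rank))          # up-projection, zero-initialized
        self.scale = scale

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, dim); add a low-rank, domain-specific correction
        return h + self.scale * (h @ self.A.t() @ self.B.t())


class SharedEncoderWithDomainLoRA(nn.Module):
    """One shared encoder reused across all sequences; each domain keeps its own LoRA."""

    def __init__(self, dim: int = 64, domains=("A", "B", "X"), rank: int = 8):
        super().__init__()
        self.shared = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.loras = nn.ModuleDict({d: DomainLoRA(dim, rank) for d in domains})

    def forward(self, seq_emb: torch.Tensor, domain: str) -> torch.Tensor:
        h = self.shared(seq_emb)       # shared sequential modeling
        return self.loras[domain](h)   # domain-specific low-rank adaptation


# Example usage (shapes are illustrative):
# enc = SharedEncoderWithDomainLoRA(dim=64)
# out = enc(torch.randn(2, 20, 64), domain="A")

The same low-rank pattern could, in principle, also adapt the domain-invariant interests produced by the invariant projector to each target domain; only the per-domain adaptation step is sketched here.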