Efficient and Adaptive Recommendation Unlearning: A Guided Filtering Framework to Erase Outdated Preferences

Yizhou Dang, Yuting Liu, Enneng Yang, Guibing Guo, Linying Jiang, Jianzhe Zhao, Xingwei Wang

Published: 31 Mar 2025 · Last Modified: 12 Nov 2025 · ACM Transactions on Information Systems · CC BY-SA 4.0
Abstract: Recommendation unlearning is an emerging task that erases the influence of user-specified data from a trained recommendation model. Most existing research follows a paradigm of partitioning the original dataset into multiple folds and retraining the corresponding sub-models so that those influences are fully removed. Despite their effectiveness, two key problems remain unexplored: (i) Retraining all sub-models is inefficient and computationally expensive, especially when facing large amounts of unlearning data. (ii) User preferences change dynamically. If users express negative opinions on interacted items they used to prefer, how can we adaptively erase the outdated preferences behind such shifts from the trained model? Although these unlearning data contain outdated information, they still carry much knowledge worth preserving. Existing methods ignore this preservation during unlearning and may remove all the knowledge in the interactions, compromising final performance. In light of these limitations, we propose a novel unlearning framework called GFEraser, which transforms unlearning into an efficient guided filtering process to avoid time-consuming retraining and retain beneficial knowledge. Specifically, we develop an intra-user negative sampling strategy to learn the outdated preferences that need to be erased. Under the guidance of a differential maximization agreement and an attention-based fusion module, the original representations are adaptively filtered and aggregated based on the learned preferences. In addition, we leverage contrastive learning to preserve invariant user preferences, maintaining final performance. Finally, we devise a new metric, the Ranking Decrease Rate, to evaluate the unlearning effect. Experimental results demonstrate that GFEraser maintains reliable recommendation performance while achieving efficient unlearning of outdated preferences, with up to 37\(\times\) acceleration.
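To make the evaluation idea concrete, the sketch below shows one plausible formulation of a "Ranking Decrease Rate" style metric: how far unlearned items drop in a user's ranked recommendation list after unlearning. The abstract does not give the paper's exact definition, so the function name, normalization, and inputs here are assumptions for illustration only.

```python
# Hypothetical sketch of a "Ranking Decrease Rate" style metric.
# Assumption (not the paper's exact definition): the metric averages the
# relative rank drop of unlearned items, where a larger value indicates a
# stronger unlearning effect.

def ranking_decrease_rate(ranks_before, ranks_after, num_items):
    """Average relative rank drop of unlearned items.

    ranks_before / ranks_after: dicts mapping item id -> 1-based rank in
    the recommendation list before and after unlearning.
    num_items: total number of rankable items (normalization bound).
    """
    drops = []
    for item, r_before in ranks_before.items():
        r_after = ranks_after[item]
        # Normalize by the maximum possible drop from the old position,
        # so a fully demoted item contributes 1.0.
        max_drop = num_items - r_before
        drop = max(r_after - r_before, 0) / max_drop if max_drop > 0 else 0.0
        drops.append(drop)
    return sum(drops) / len(drops) if drops else 0.0
```

Under this reading, an item pushed from rank 1 to rank 51 out of 101 items contributes a drop of 0.5, while an item whose rank is unchanged contributes 0.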
External IDs: doi:10.1145/3706633