Keywords: Recommender system, Recommendation unlearning, Collaborative filtering, Security and privacy
TL;DR: We propose COVA, a novel framework that performs recommendation unlearning via task vector arithmetic in SVD-derived embedding space, achieving 18.83% better completeness and 38.5× speedup while maintaining utility.
Abstract: Driven by the growing need for data privacy, machine unlearning seeks to efficiently remove the influence of specific data from trained models without costly retraining. This challenge is particularly acute in recommendation unlearning because collaborative filtering (CF) inherently entangles the influence of individual interactions across the entire user-item latent space, making its precise removal non-trivial. However, prevailing paradigms exhibit fundamental limitations: partition-based methods fragment the interaction structure by design, while influence function-based approaches focus on localized parameter adjustments and fail to capture broader collaborative patterns. In this paper, we propose COVA (COllaborative Vector Arithmetic), a novel framework that directly addresses these issues. Specifically, COVA constructs a shared orthogonal latent space that preserves collaborative patterns across the entire interaction matrix. Within this space, unlearning is performed by subtracting task vectors. Notably, whereas task vector arithmetic traditionally operates in the parameter space, we reinterpret it for the embedding space to align with the learning mechanism of CF. As a result, our output-level approach operates directly on the prediction matrix of any CF model, without any access to model internals or training procedures. Experiments on three benchmark datasets demonstrate that COVA improves unlearning completeness by up to 18.83% and achieves a speedup of 15 to 38.5 times over the strongest baseline, while maintaining utility comparable to the retrained model.
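The core idea can be illustrated with a minimal sketch. The code below is a toy illustration (not the authors' implementation): it builds SVD-derived embeddings of the interaction matrix with and without the forget set, forms a task vector from their difference in the output (prediction) space, and subtracts it from an arbitrary CF model's prediction matrix. All names (`svd_predict`, `tau`, `P_cf`) are hypothetical, and details such as basis alignment and scaling are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary user-item interaction matrix (5 users x 4 items).
R_full = (rng.random((5, 4)) > 0.5).astype(float)

# Forget set: suppose we must unlearn user 0's interaction with item 1.
R_retain = R_full.copy()
R_retain[0, 1] = 0.0


def svd_predict(R: np.ndarray, k: int) -> np.ndarray:
    """Rank-k truncated-SVD reconstruction of R (a simple CF prediction matrix)."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]


k = 2  # latent dimensionality (hypothetical choice)

# Task vector in the prediction space: the influence of the forgotten data,
# captured as the gap between the full and retain-only SVD reconstructions.
tau = svd_predict(R_full, k) - svd_predict(R_retain, k)

# P_cf stands in for the prediction matrix of ANY trained CF model;
# here we reuse the SVD reconstruction purely as a placeholder.
P_cf = svd_predict(R_full, k)

# Output-level unlearning: subtract the task vector from the predictions,
# with no access to the CF model's parameters or training procedure.
P_unlearned = P_cf - tau
```

In this toy case the subtraction recovers `svd_predict(R_retain, k)` exactly; the point is that the arithmetic touches only prediction matrices, which is what makes the approach model-agnostic.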
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 25062