Keywords: Recommender System, Model Security, Model Extraction Attack
TL;DR: A black-box model extraction method for recommender systems under data-free settings.
Abstract: Privacy and security concerns are becoming increasingly critical for recommender systems, as model extraction attacks provide an effective way to probe system robustness by replicating a model’s recommendation logic — potentially exposing sensitive user preferences and proprietary algorithmic knowledge. Despite the promising performance of existing model extraction methods, they still face two key challenges: the unrealistic assumption that member or surrogate data are accessible, and a generalization problem in which surrogate-model architecture constraints lead to overfitting on the generated data. To tackle these challenges, in this paper we first thoroughly analyze how the architecture of the surrogate model influences extraction attack performance, highlighting the superior effectiveness of the graph convolution architecture. Based on this, we propose a novel Data-free Black-box Graph convolution-based Recommender Model Extraction method, dubbed DBGRME. Specifically, DBGRME contains: (1) an interaction generator that removes the need for member data in a data-free scenario; and (2) a generalization-aware graph convolution-based surrogate model that captures diverse and complex recommender interaction patterns to mitigate the overfitting issue. Experimental results on various datasets and victim models demonstrate the superiority of our attack in data-free scenarios (e.g., surpassing data-required PTQ methods by 17.4% on LightGCN). Code is available: \url{https://github.com/Vencent-Won/DBGRME.git}.
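The extraction setting described in the abstract can be illustrated with a minimal sketch. This is not the paper's method: the graph-convolution surrogate is replaced by a plain linear scorer, the interaction generator by random sparse sampling, and the victim (`victim_scores`) is a hypothetical low-rank recommender. It only shows the data-free loop: generate synthetic interactions, query the victim as a black box, and fit the surrogate to the returned scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 30, 6

# Hypothetical black-box victim (internals unknown to the attacker):
# scores all items from a user's binary interaction vector.
V = rng.normal(size=(n_items, dim))

def victim_scores(x):
    return V @ (V.T @ x)  # low-rank scoring, stands in for any recommender

# Data-free extraction: sample synthetic interaction vectors (a stand-in for
# the paper's interaction generator) and fit a surrogate -- here a simple
# linear map, not a graph-conv model -- to the victim's scores by SGD.
M = np.zeros((n_items, n_items))
lr = 0.01
for step in range(2000):
    x = (rng.random(n_items) < 0.2).astype(float)  # generated "interactions"
    if x.sum() == 0:
        continue
    target = victim_scores(x)        # only black-box query access is used
    diff = M @ x - target
    M -= lr * np.outer(diff, x)      # gradient of 0.5 * ||M @ x - target||^2

# Fidelity check on fresh generated queries: does the surrogate's
# top-1 recommendation agree with the victim's?
hits = 0
for _ in range(100):
    x = (rng.random(n_items) < 0.2).astype(float)
    if x.sum() == 0:
        continue
    hits += int(np.argmax(M @ x) == np.argmax(victim_scores(x)))
print(f"top-1 agreement: {hits}/100")
```

In this toy realizable setting the surrogate's top-1 recommendation agrees with the victim's on most queries, which is the fidelity notion extraction attacks target; the paper's contribution lies in making this work with a graph-convolution surrogate and a learned generator rather than the stand-ins used here.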
Primary Area: Social and economic aspects of machine learning (e.g., fairness, interpretability, human-AI interaction, privacy, safety, strategic behavior)
Submission Number: 19440