Private Data Leakage via Exploiting Access Patterns of Sparse Features in Deep Learning-based Recommendation Systems

Published: 21 Nov 2022, Last Modified: 05 May 2023. TSRML 2022.
Keywords: Data Privacy, Recommendation Systems, Access Pattern Leakage, Attacks
TL;DR: We analyzed the information leakage through embedding table accesses in DNN-based recommendation models, implemented the attacks and evaluated them on open-source data, and demonstrated why none of the state-of-the-art techniques can solve the issue.
Abstract: Deep learning-based recommendation models use sparse and dense features of a user to predict an item that the user may like. Because these features carry users' private information, service providers often protect them with memory encryption (e.g., using hardware such as Intel SGX). However, even with such protection, an attacker may still learn which entries of a sparse feature are nonzero by observing the embedding table access pattern. In this work, we show that leaking only the positions of the sparse features' nonzero entries poses a serious privacy threat. Using the embedding table access pattern, we show that it is possible to identify or re-identify a user and to extract sensitive user attributes. We subsequently show that applying a hash function to anonymize the access pattern is not a solution, as the hashing can be reverse-engineered in many cases.
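To make the leakage concrete, below is a minimal Python sketch (not from the paper; all names such as embedding_lookup and hashed_id are illustrative) of the two claims, under simplifying assumptions: the rows gathered from an embedding table are exactly the nonzero positions of a user's sparse feature, and a deterministic hash of those indices can be undone by frequency analysis when the item popularity distribution is publicly known.

```python
import hashlib
from collections import Counter

import numpy as np

# Toy embedding table: one row per category of a sparse feature
# (e.g., one row per movie ID). Sizes are illustrative.
NUM_CATEGORIES = 1000
EMBEDDING_DIM = 16
rng = np.random.default_rng(0)
embedding_table = rng.standard_normal((NUM_CATEGORIES, EMBEDDING_DIM))

def embedding_lookup(sparse_ids):
    """Gather rows for the user's nonzero sparse-feature entries.

    Even if embedding_table lives in encrypted memory (e.g., an SGX
    enclave), the *addresses* touched by this gather reveal sparse_ids
    to an attacker who can observe the memory access pattern.
    """
    return embedding_table[sparse_ids]

# A user's multi-hot sparse feature, represented by its nonzero indices.
user_sparse_ids = np.array([42, 7, 901])
_ = embedding_lookup(user_sparse_ids)
# The observed access pattern *is* the private feature:
observed_access_pattern = list(user_sparse_ids)  # -> [42, 7, 901]

# Hashing the indices before the lookup does not help: a deterministic
# hash preserves each category's access frequency, so an attacker who
# knows the public popularity distribution (e.g., movie popularity) can
# match hashed buckets back to categories by rank.
def hashed_id(i):
    return int(hashlib.sha256(str(i).encode()).hexdigest(), 16) % (2 ** 32)

# Simulate traffic from many users under a known, skewed popularity law.
popularity = 1.0 / np.arange(1, NUM_CATEGORIES + 1)
popularity /= popularity.sum()
traffic = rng.choice(NUM_CATEGORIES, size=200_000, p=popularity)

observed = Counter(hashed_id(i) for i in traffic)       # attacker's view
ranked_hashes = [h for h, _ in observed.most_common()]  # by frequency
ranked_items = np.argsort(-popularity)                  # public knowledge
recovered = dict(zip(ranked_hashes, ranked_items))

# With enough traffic, the most frequent hash maps to the most popular
# item, the next to the next, and so on:
print([recovered[h] for h in ranked_hashes[:5]])  # likely [0, 1, 2, 3, 4]
```

This frequency-matching step is one illustrative way hashing can be reverse-engineered, in the spirit of the abstract's claim; the paper's actual attacks may differ.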