Keywords: Pre-trained Encoders, Privacy Attacks, AI Security
TL;DR: We unveil a new privacy attack that can extract sensitive encoder information from a targeted downstream ML service and further facilitate other ML attacks against that service.
Abstract: Pre-trained encoders available online have been widely adopted to build downstream machine learning (ML) services, but various attacks against these encoders also pose security and privacy threats to this downstream ML service paradigm.
We unveil a new vulnerability: the Pre-trained Encoder Inference (PEI) attack, which can extract sensitive encoder information from a targeted downstream ML service and then use it to facilitate other ML attacks against that service.
Given only API access to the targeted downstream service and a set of candidate encoders, the PEI attack can infer which of the candidate encoders is secretly used by the targeted service.
Compared with existing encoder attacks, which mainly target encoders on the upstream side, the PEI attack can compromise encoders even after they have been deployed and hidden behind downstream ML services, making it a more realistic threat.
We empirically verify the effectiveness of the PEI attack on vision encoders:
we first conduct PEI attacks against two downstream services (i.e., image classification and multimodal generation),
and then show how the PEI attack can facilitate other ML attacks (i.e., model stealing attacks against image classification models and adversarial attacks against multimodal generative models).
Our results call for new security and privacy considerations when deploying encoders in downstream services.
The code has been submitted and will be released publicly.
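To make the attack setting concrete, below is a minimal, hypothetical sketch of how a candidate-encoder inference could be scored under black-box API access. It is not the paper's algorithm: the scoring heuristic (checking whether probe images that a candidate encoder maps to near-identical embeddings receive identical service predictions), and the `query_service`, `candidate_encoder`, `pei_score`, and `infer_encoder` names are all assumptions introduced for illustration.

```python
# Hypothetical sketch of the PEI attack setting (not the paper's method):
# score each candidate encoder by how consistent the black-box service's
# predictions are with that candidate's embedding geometry.
import numpy as np

def pei_score(candidate_encoder, probe_images, query_service, n_pairs=100, seed=0):
    """Fraction of embedding-similar probe pairs that the service labels identically.

    candidate_encoder: callable mapping an input to a 1-D embedding (assumed interface).
    query_service:     callable returning the service's prediction for an input (assumed API).
    A higher score suggests the candidate is more likely the hidden encoder.
    """
    rng = np.random.default_rng(seed)
    # Embed all probes with the candidate encoder and normalize for cosine similarity.
    emb = np.stack([candidate_encoder(x) for x in probe_images]).astype(float)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = emb @ emb.T
    np.fill_diagonal(sims, -np.inf)  # ignore self-similarity

    agree = 0
    for _ in range(n_pairs):
        i = int(rng.integers(len(probe_images)))
        j = int(np.argmax(sims[i]))  # most similar probe under this candidate
        # If the service really uses this encoder, near-duplicate embeddings
        # should yield the same downstream prediction.
        agree += int(query_service(probe_images[i]) == query_service(probe_images[j]))
    return agree / n_pairs

def infer_encoder(candidates, probe_images, query_service):
    """Return (best_candidate_name, per-candidate scores) for a dict of candidates."""
    scores = {name: pei_score(enc, probe_images, query_service)
              for name, enc in candidates.items()}
    return max(scores, key=scores.get), scores
```

In this sketch the attacker never needs the service's internal parameters, only its prediction API and locally available copies of the candidate encoders, which mirrors the black-box threat model described in the abstract.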
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 12677