Keywords: Mixture-of-Experts, privacy, ml-security, information security, buffer overflow, leakage, exploit, token dropping
TL;DR: We present a novel attack against MoE architectures that exploits Token Dropping in expert-choice routing to steal user prompts.
Abstract: Mixture-of-Experts (MoE) models improve the efficiency and scalability of dense language models by \emph{routing} each token to a small number of experts in each layer of the model. In this paper, we show how an adversary who can arrange for their queries to appear in the same batch of examples as a victim's queries can exploit expert-choice routing to fully disclose the victim's prompt. We successfully demonstrate the effectiveness of this attack on a two-layer Mixtral model. Our results show that we can extract the entire prompt using $\mathcal{O}(\text{Vocabulary size} \times \text{prompt length}^2)$ queries, or at most 100 queries per token in the setting we consider. To our knowledge, ours is the first data-reconstruction attack that originates from a flaw in the model architecture, as opposed to the model parameterization.
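The expert-choice routing that the abstract refers to can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the capacity value and the stable, batch-order tie-breaking rule are assumptions, but they show why a token's fate (routed vs. dropped) can depend on which other tokens share its batch, which is the side channel a cross-batch adversary can exploit.

```python
import numpy as np

def expert_choice_route(scores, capacity):
    """Expert-choice routing sketch.

    scores: [num_tokens, num_experts] router logits for one batch.
    Each expert selects its top-`capacity` tokens from the whole batch;
    tokens selected by no expert are dropped. Because experts rank
    tokens across the batch, an adversary's co-batched tokens can
    displace a victim's tokens -- the leakage exploited in the attack.
    """
    num_tokens, num_experts = scores.shape
    assignment = np.zeros((num_tokens, num_experts), dtype=bool)
    for e in range(num_experts):
        # Stable sort: among equal scores, tokens earlier in the
        # batch win the capacity slot (an assumed tie-break rule).
        top = np.argsort(-scores[:, e], kind="stable")[:capacity]
        assignment[top, e] = True
    dropped = ~assignment.any(axis=1)
    return assignment, dropped

# Toy batch: 4 tokens, 2 experts, each expert keeps 1 token.
scores = np.array([[1.0, 0.0],
                   [0.9, 0.1],
                   [0.2, 0.8],
                   [0.1, 0.9]])
assignment, dropped = expert_choice_route(scores, capacity=1)
# Tokens 1 and 2 lose the capacity race and are dropped.
```

Whether token 1 is dropped here depends entirely on token 0 outscoring it for expert 0's single slot; co-batched adversarial tokens that shift this ranking produce observable differences, which is the intuition behind the prompt-stealing attack.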
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7546