Subword embedding from bytes against embedding-based attacks

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: zip
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Privacy, embedding, bytes, Transformer, language model, federated learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We propose Subword Embedding from Bytes (SEB) as a novel defense that can protect privacy while maintaining efficiency and accuracy.
Abstract: NLP models have become a powerful technology that shapes social life like never before, raising practical concerns including privacy invasion and high computational cost. While federated learning alleviates these problems, attackers can still recover the private training data of victim clients from the transmitted model parameters and gradients. Protecting against such leakage of private information remains an open challenge. We propose Subword Embedding from Bytes (SEB), a novel defense that protects privacy while maintaining efficiency and accuracy. Our experiments demonstrate that SEB effectively defends against embedding-based attacks, which recover the sentences in a batch of text data from the gradients in federated learning. As a defense, SEB does not compromise the model's accuracy. We also verify that SEB obtains comparable and even better results than traditional subword embedding methods on machine translation, sentiment analysis, and language modeling.
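To make the abstract's core idea concrete, here is a minimal sketch of what "embedding a subword from its bytes" could look like: instead of looking up one row per subword in a large embedding table (whose gradients an attacker can invert to identify which tokens were in a batch), the subword is decomposed into its UTF-8 bytes and its embedding is aggregated from a small 256-entry byte table. The function name, table size, and mean-pooling aggregation are illustrative assumptions, not the authors' exact SEB architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
BYTE_VOCAB = 256  # one embedding row per possible byte value
DIM = 8           # toy embedding dimension (assumption)
byte_table = rng.standard_normal((BYTE_VOCAB, DIM))

def seb_embed(subword: str) -> np.ndarray:
    """Embed a subword from its UTF-8 bytes.

    Mean-pooling over byte embeddings is an illustrative choice;
    the paper's actual aggregation may differ.
    """
    byte_ids = list(subword.encode("utf-8"))
    return byte_table[byte_ids].mean(axis=0)

# Many subwords share the same small set of byte rows, so per-row
# gradients no longer reveal individual subword identities directly.
vec = seb_embed("privacy")
print(vec.shape)  # (8,)
```

Because every token's gradient touches only the shared 256-row byte table, the one-to-one mapping between embedding rows and vocabulary items that embedding-based attacks exploit is removed.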
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5648