Training a Tokenizer for Free with Private Federated Learning

Published: 27 Mar 2022, Last Modified: 05 May 2023
Venue: FL4NLP@ACL2022
Readers: Everyone
Keywords: tokenization, private federated learning
TL;DR: A neural-net language model can be trained with private federated learning, but a tokenizer cannot; we propose an indirect method that spends no additional privacy budget and works well.
Abstract: Federated learning with differential privacy, i.e., private federated learning (PFL), makes it possible to train models on private data distributed across users' devices without harming privacy. PFL is efficient for models, such as neural networks, that have a fixed number of parameters and thus a fixed-dimensional gradient vector. Such models include neural-net language models, but not tokenizers, the topic of this work. Training a tokenizer requires frequencies of words from an unlimited vocabulary, and existing methods for finding an unlimited vocabulary need a separate privacy budget. A workaround is to train the tokenizer on publicly available data. However, in this paper we first show that a tokenizer trained on mismatched data degrades model performance compared to a privacy-violating "oracle" tokenizer that accesses user data: perplexity increases by 20%. We also show that sub-word tokenizers are better suited to the federated context than word-level ones, since they can encode new words, albeit with more tokens per word. Second, we propose a novel method to obtain a tokenizer without spending any additional privacy budget. During private federated learning of the language model, we sample from the model, train a new tokenizer on the sampled sequences, and update the model embeddings. We then continue private federated learning and obtain performance within 1% of the "oracle" tokenizer. We show that, since this process trains the tokenizer on the server using data for which the privacy loss has already been accounted for, our method spends no additional privacy budget.
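The abstract's tokenizer-update step can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes a PyTorch model whose vocabulary lookup is a torch.nn.Embedding, uses the HuggingFace tokenizers library for the sub-word (BPE) tokenizer, and the names swap_tokenizer, sampled_texts, and the 10,000-token vocabulary size are hypothetical choices for the example. Here sampled_texts stands in for sequences sampled on the server from the partially PFL-trained language model.

```python
import torch
from tokenizers import Tokenizer, models, pre_tokenizers, trainers


def swap_tokenizer(embedding: torch.nn.Embedding,
                   old_tokenizer: Tokenizer,
                   sampled_texts,
                   vocab_size: int = 10_000):
    """Sketch of the mid-training tokenizer swap (illustrative names).

    sampled_texts: iterable of strings sampled from the PFL-trained LM;
    their privacy loss is already accounted for, so this step is "free".
    """
    # Train a fresh sub-word (BPE) tokenizer on model samples only,
    # never on raw user data.
    new_tok = Tokenizer(models.BPE(unk_token="[UNK]"))
    new_tok.pre_tokenizer = pre_tokenizers.Whitespace()
    trainer = trainers.BpeTrainer(vocab_size=vocab_size,
                                  special_tokens=["[UNK]"])
    new_tok.train_from_iterator(sampled_texts, trainer=trainer)

    # Rebuild the embedding matrix for the new vocabulary: copy rows for
    # tokens shared with the old vocabulary, randomly initialise the rest.
    dim = embedding.weight.size(1)
    new_weight = torch.randn(new_tok.get_vocab_size(), dim) * 0.02
    for token, new_id in new_tok.get_vocab().items():
        old_id = old_tokenizer.token_to_id(token)
        if old_id is not None:
            new_weight[new_id] = embedding.weight.data[old_id]

    return new_tok, torch.nn.Embedding.from_pretrained(new_weight,
                                                       freeze=False)
```

After the swap, private federated learning simply resumes with the new tokenizer and embedding; copying the rows of overlapping tokens warm-starts the updated vocabulary rather than restarting training from scratch.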