Analyzing Privacy Loss in Updates of Natural Language Models

Anonymous

Sep 25, 2019 Blind Submission readers: everyone
  • TL;DR: comparing updates of language models reveals many details about changes in training data
  • Abstract: To continuously improve quality and reflect changes in data, machine learning-based services have to regularly re-train and update their core models. In the setting of language models, we show that a comparative analysis of model snapshots before and after an update can reveal a surprising amount of detailed information about the changes in the data used for training before and after the update. We discuss the privacy implications of our findings, propose mitigation strategies and evaluate their effect.
  • Keywords: Language Modelling, Privacy
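The comparative analysis the abstract describes can be illustrated with a minimal sketch. The distributions, token names, and numbers below are hypothetical stand-ins, not the paper's actual models or method: we compare a language model's next-token probabilities before and after an update, and rank tokens by how much their log-probability rose, which can surface content present only in the new training data.

```python
import math

# Hypothetical next-token distributions for the same query context,
# taken from model snapshots before and after an update.
p_before = {"weather": 0.40, "meeting": 0.35, "smith": 0.01, "park": 0.24}
p_after  = {"weather": 0.30, "meeting": 0.30, "smith": 0.20, "park": 0.20}

def differential_scores(before, after):
    """Log-probability difference per token.

    Large positive values flag tokens the update made much more likely,
    hinting at content in the new training data.
    """
    return {tok: math.log(after[tok]) - math.log(before[tok])
            for tok in before}

scores = differential_scores(p_before, p_after)
most_boosted = max(scores, key=scores.get)
# Here "smith" jumps from 1% to 20% probability, so it tops the ranking.
```

Repeating this over many query contexts and extending high-scoring tokens one step at a time would let an analyst reconstruct phrases that were added to the training data, which is the kind of leakage the abstract warns about.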