Evaluating the Impact of Text De-Identification on Downstream NLP Tasks

Published: 20 Mar 2023, Last Modified: 11 Apr 2023 · NoDaLiDa 2023
Keywords: anonymisation, text de-identification, NLP, BERT, ERNIE, impact
TL;DR: We investigate the impact of text de-identification on the performance of downstream NLP tasks.
Abstract: Data anonymisation is often required to comply with regulations when transferring information across departments or entities. The risk, however, is that this procedure distorts the data and jeopardises the models built on it. Intuitively, training an NLP model on anonymised data may lower its performance compared to a model trained on non-anonymised data. In this paper, we investigate the impact of de-identification on the performance of nine downstream NLP tasks. We focus on the anonymisation and pseudonymisation of personal names and compare six different anonymisation strategies for two state-of-the-art pre-trained models. Based on these experiments, we formulate recommendations on how de-identification should be performed to guarantee accurate NLP models. Our results reveal that de-identification does have a negative impact on the performance of NLP models, but this impact is relatively low. We also find that pseudonymisation techniques involving random names lead to better performance across most tasks.
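The pseudonymisation strategy the abstract refers to, replacing detected personal names with randomly chosen substitute names, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the replacement-name pool, the function names, and the assumption that names arrive from an upstream NER step are all hypothetical.

```python
import random

# Hypothetical pool of substitute names; the paper does not specify
# which name inventory was used.
REPLACEMENT_NAMES = ["Alex Morgan", "Sam Lee", "Jo Kim"]

def pseudonymise(text, detected_names, rng=None):
    """Replace each detected personal name with a random substitute.

    `detected_names` is assumed to come from an upstream NER /
    de-identification step; this sketch only handles the replacement.
    """
    rng = rng or random.Random(0)  # seeded for reproducibility
    for name in detected_names:
        text = text.replace(name, rng.choice(REPLACEMENT_NAMES))
    return text
```

For example, `pseudonymise("Maria met John in Oslo.", ["Maria", "John"])` returns the sentence with both personal names swapped for random entries from the pool, leaving the rest of the text (including the location) untouched.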
Student Paper: Yes, the first author is a student