Membership Inference Attacks against Language Models via Neighbourhood Comparison
Abstract: Membership inference attacks (MIAs) aim to predict whether a data sample was present in the training data of a machine learning model or not, and are widely used for assessing the privacy risks of language models. Most existing attacks rely on the observation that models tend to assign higher probabilities to their training samples than to non-training points. However, simple thresholding of the model score in isolation tends to lead to high false-positive rates, as it does not account for the intrinsic complexity of a sample. Recent work has demonstrated that reference-based attacks, which compare model scores to those obtained from a reference model trained on similar data, can substantially improve the performance of MIAs. However, in order to train reference models, attacks of this kind make the strong and arguably unrealistic assumption that an adversary has access to samples closely resembling the original training data. Therefore, we investigate their performance in more realistic scenarios and find that they are highly fragile with respect to the data distribution used to train reference models. To investigate whether this fragility provides a layer of safety, we propose and evaluate neighbourhood attacks, which compare model scores for a given sample to scores of synthetically generated neighbour texts and therefore eliminate the need for access to the training data distribution. We show that, in addition to being competitive with reference-based attacks that have perfect knowledge about the training data distribution, our attack clearly outperforms existing reference-free attacks as well as reference-based attacks with imperfect knowledge, which demonstrates the need for a reevaluation of the threat model of adversarial attacks.
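
To make the neighbourhood comparison idea concrete, the following is a minimal sketch of the decision rule described in the abstract. It is illustrative rather than the paper's implementation: the callables `score` (a target-model loss such as negative log-likelihood) and `generate_neighbours` (producing perturbed texts of comparable complexity), as well as the threshold name `gamma`, are hypothetical placeholders supplied by the reader.

```python
# Minimal sketch of a neighbourhood comparison membership test (illustrative only).
# Assumed inputs (not from the paper):
#   score(text)               -> target-model loss on `text` (e.g. negative log-likelihood)
#   generate_neighbours(text) -> perturbed "neighbour" texts of similar complexity
# A sample is flagged as a likely training member if its loss is lower than the
# average loss of its neighbours by more than a calibration threshold `gamma`.

from typing import Callable, List


def neighbourhood_attack(
    text: str,
    score: Callable[[str], float],
    generate_neighbours: Callable[[str], List[str]],
    gamma: float = 0.0,
) -> bool:
    neighbours = generate_neighbours(text)
    neighbour_losses = [score(n) for n in neighbours]
    avg_neighbour_loss = sum(neighbour_losses) / len(neighbour_losses)
    # Unlike a plain score threshold, the neighbour average calibrates for the
    # intrinsic complexity of the sample itself.
    return (avg_neighbour_loss - score(text)) > gamma
```

In practice, the neighbours could be generated with a masked language model that swaps a few words while preserving meaning, so that any remaining gap between the sample's loss and its neighbours' losses reflects memorisation rather than sample difficulty.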