RMR: A Relative Membership Risk Measure for Machine Learning Models

Published: 2025 · Last Modified: 21 Jan 2026 · IEEE Trans. Dependable Secur. Comput. 2025 · CC BY-SA 4.0
Abstract: Privacy leakage poses a significant threat when machine learning foundation models trained on private data are released. One such threat is the membership inference attack (MIA), which determines whether a specific example was included in a model's training set. This article shifts focus from developing new MIA algorithms to measuring a model's risk under MIA. We introduce a novel metric, Relative Membership Risk (RMR), which assesses a model's MIA vulnerability from a comparative standpoint. RMR computes the difference in prediction loss on training examples relative to a predefined reference model, enabling risk comparison across models without requiring knowledge of details such as training strategy, architecture, or data distribution. We also study the selection of the reference model and show that using a high-risk reference model improves the accuracy of the RMR measure. To identify the most vulnerable reference model, we propose an efficient iterative algorithm that selects the optimal model from a set of candidates. Through extensive empirical evaluations on various datasets and network architectures, we demonstrate that RMR is an accurate and efficient tool for measuring the membership privacy risk of both individual training examples and the overall machine learning model.
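The abstract does not give the exact formula, but it describes RMR as a per-example loss comparison between the model under evaluation and a fixed reference model. The PyTorch sketch below is an illustrative reading of that idea only; the function names, the sign convention, and the choice of cross-entropy loss are assumptions rather than the paper's definition.

```python
import torch
import torch.nn.functional as F

def per_example_loss(model, x, y):
    """Prediction loss for each example, without reduction."""
    model.eval()
    with torch.no_grad():
        logits = model(x)
        return F.cross_entropy(logits, y, reduction="none")

def relative_membership_risk(target_model, reference_model, x, y):
    """Illustrative RMR score: the gap between the reference model's loss and
    the target model's loss on training examples. Under this reading, a larger
    gap indicates that the target model fits the example more tightly and is
    therefore more exposed to membership inference."""
    target_loss = per_example_loss(target_model, x, y)
    reference_loss = per_example_loss(reference_model, x, y)
    per_example_scores = reference_loss - target_loss
    return per_example_scores  # average these for a model-level risk score
```

Because the score is a difference of losses against a shared reference, two models can be compared on the same training examples without inspecting their architectures or training procedures, which is the comparative standpoint the abstract emphasizes.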