Connecting Membership Inference Privacy and Generalization through Instance-Wise Measurements

Published: 22 Sept 2025 · Last Modified: 01 Dec 2025 · NeurIPS 2025 Workshop · CC BY 4.0
Keywords: generalization, membership inference privacy, hypothesis testing
Abstract: Membership Inference Attacks (MIAs) seek to assess the privacy risk of a model by extracting membership information, which represents a fundamental unit of information that a model contains. A prevailing intuition in the MIA literature is that decreasing the amount of information in a neural network should both reduce privacy risk and improve generalization ability. Despite this intuitive connection, both theoretical and empirical work has suggested that regularization, whether implicit or explicit, has widely varying effects on privacy risk across individual points in the training data. In this work, we take a first step towards understanding the relationship between privacy and generalization by deriving an instance-wise measurement of Membership Inference Privacy (MIP). We then connect this definition to generalization bounds using a data-dependent prior on the weight distribution.
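As background for the abstract's framing of MIAs as a hypothesis test over individual points, the sketch below illustrates an instance-wise membership score using a simple loss-threshold attack on synthetic data. This is a generic baseline MIA, not the paper's MIP measurement; the loss distributions, score function, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-example losses: training members tend to have lower
# loss than held-out non-members (assumed distributions, for illustration).
member_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=1.0, size=1000)

def membership_score(loss, reference_losses):
    """Instance-wise membership score: the fraction of reference
    (non-member) losses exceeding the observed loss. Scores near 1
    suggest the point was likely a training member; this is a
    one-sided hypothesis test against the non-member loss distribution."""
    return float(np.mean(reference_losses > loss))

member_scores = np.array(
    [membership_score(l, nonmember_losses) for l in member_losses])
nonmember_scores = np.array(
    [membership_score(l, nonmember_losses) for l in nonmember_losses])

# On average, members receive higher scores, but individual points
# vary widely -- the instance-wise spread the abstract alludes to.
print(round(float(member_scores.mean()), 3))
print(round(float(nonmember_scores.mean()), 3))
print(round(float(member_scores.std()), 3))
```

Note that the per-instance scores are far from uniform across members, which mirrors the abstract's point that aggregate privacy measures can hide large differences between individual training points.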
Submission Number: 69