Keywords: Subset Training, Membership Inference Attack
Abstract: Subset training, where models are trained on a carefully chosen portion of the data rather than the entire dataset, has become a standard tool for scaling modern machine learning. From coreset selection in vision to large-scale filtering in language models, these methods promise scalability without compromising utility. A common intuition is that training on fewer samples should also reduce privacy risks. In this paper, we challenge this assumption. We show that subset training is not privacy-free: the very choices of which data are included or excluded introduce a new privacy surface and can leak additional sensitive information. Such information can be captured by adversaries either through side-channel metadata from the subset selection process or via the outputs of the target model. To systematically study this phenomenon, we propose CoLa (Choice Leakage Attack), a unified framework for analyzing privacy leakage in subset selection. In CoLa, depending on the adversary’s knowledge of side-channel information, we define two practical attack scenarios: Subset-aware Side-channel Attacks and Black-box Attacks. Under both scenarios, we investigate two privacy surfaces unique to subset training: (1) Training-membership MIA (TM-MIA), which concerns only the membership privacy of the training data, and (2) Selection-participation MIA (SP-MIA), which concerns the privacy of all samples that participated in the subset selection process. Notably, SP-MIA enlarges the notion of membership from model training to the entire data–model supply chain. Experiments on vision and language models show that existing threat models underestimate the privacy risks of subset training: the enlarged privacy surface not only retains training-membership leakage but also exposes selection membership, extending risks from individual models to the broader ML ecosystem.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 1101
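To make the two membership notions from the abstract concrete, the following minimal sketch evaluates a simple loss-threshold membership inference attack under both definitions. It is not the paper's CoLa implementation; the per-sample losses, threshold, and group sizes are hypothetical stand-ins for values an adversary would obtain by querying a target model.

```python
# Illustrative sketch (hypothetical data, not the CoLa framework itself):
# a loss-threshold MIA scored under two membership definitions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample losses from a target model trained on a subset.
loss_trained      = rng.normal(0.3, 0.2, 1000)  # selected into the training subset
loss_selected_out = rng.normal(0.8, 0.3, 1000)  # seen by the selector, not trained on
loss_unseen       = rng.normal(1.0, 0.3, 1000)  # never involved in the pipeline

def threshold_attack(member_losses, nonmember_losses, tau):
    """Predict 'member' when loss < tau; return balanced accuracy."""
    tpr = np.mean(member_losses < tau)
    fpr = np.mean(nonmember_losses < tau)
    return 0.5 * (tpr + (1.0 - fpr))

tau = 0.6  # hypothetical threshold, e.g. calibrated on shadow models
# TM-MIA: membership means "used to train the target model".
tm_acc = threshold_attack(loss_trained,
                          np.concatenate([loss_selected_out, loss_unseen]), tau)
# SP-MIA: membership means "participated in subset selection", selected or not.
sp_acc = threshold_attack(np.concatenate([loss_trained, loss_selected_out]),
                          loss_unseen, tau)
print(f"TM-MIA balanced accuracy: {tm_acc:.2f}")
print(f"SP-MIA balanced accuracy: {sp_acc:.2f}")
```

The sketch only illustrates how the same attack statistic is scored against two different member/non-member splits; the paper's attacks additionally exploit side-channel metadata from the selection process, which this toy example omits.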