LoMime: Query-Efficient Membership Inference using Model Extraction in Label-Only Settings

ICLR 2026 Conference Submission 21745 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Model Extraction Attacks, Membership Inference Attacks, Black-box Attacks, Security and Privacy in Machine Learning, Adversarial Machine Learning
TL;DR: We propose a cost-effective label-only membership inference attack scheme that uses model extraction as a precursor and performs membership inference offline on the extracted model.
Abstract: Membership inference attacks (MIAs) threaten the privacy of machine learning models by revealing whether a specific data point was used during training. Existing MIAs often rely on impractical assumptions, such as access to public datasets, shadow models, confidence scores, or knowledge of the training data distribution, which makes them vulnerable to defenses like confidence masking and adversarial regularization. Even under these stricter constraints, label-only MIAs suffer from high per-sample query requirements. We propose a cost-effective label-only MIA framework based on transferability and model extraction. By querying the target model $M$ using active sampling, perturbation-based selection, and synthetic data, we extract a functionally similar surrogate $S$ on which membership inference is then performed. This shifts the query overhead to a one-time extraction phase and eliminates repeated queries to $M$. Operating under strict black-box constraints, our method matches the performance of state-of-the-art label-only MIAs while significantly reducing query costs. On benchmarks including Purchase, Location, and Texas Hospital, we show that a query budget equivalent to testing $\approx1\%$ of the training samples suffices to extract $S$ and achieve membership inference accuracy within $\pm1\%$ of inference performed directly on $M$. We also evaluate the effectiveness of standard defenses proposed against label-only MIAs (e.g., DP-SGD, regularization) against our attack.
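To make the two-phase pipeline described in the abstract concrete, the following is a minimal Python sketch under stated assumptions: all helper names (`query_target_labels`, `perturbation_select`, `boundary_distance`) and the MLP surrogate are illustrative placeholders, not the submission's actual implementation, and the perturbation-based selection and offline membership score are simplified stand-ins for the techniques named in the abstract.

```python
# Hypothetical sketch of the label-only pipeline: (1) spend a one-time query
# budget on the black-box target M to extract a surrogate S, (2) run the
# membership test offline on S with no further queries to M.
import numpy as np
from sklearn.neural_network import MLPClassifier


def query_target_labels(target_predict, X):
    """One-time, label-only queries to the black-box target model M."""
    return np.asarray(target_predict(X))  # hard labels only


def perturbation_select(X_pool, target_predict, sigma=0.05, rounds=5, k=256):
    """Keep the k pool points whose target labels flip most often under small
    Gaussian noise (a simplified stand-in for perturbation-based selection)."""
    base = query_target_labels(target_predict, X_pool)
    flips = np.zeros(len(X_pool))
    for _ in range(rounds):
        noisy = X_pool + sigma * np.random.randn(*X_pool.shape)
        flips += (query_target_labels(target_predict, noisy) != base)
    return X_pool[np.argsort(-flips)[:k]]


def extract_surrogate(target_predict, X_pool):
    """Phase 1: train the surrogate S from labels obtained during extraction."""
    X_q = perturbation_select(X_pool, target_predict)
    y_q = query_target_labels(target_predict, X_q)
    return MLPClassifier(hidden_layer_sizes=(128,), max_iter=500).fit(X_q, y_q)


def boundary_distance(S, x, y, sigma=0.05, n=200):
    """Offline robustness score on S: fraction of noisy copies of x that S
    still assigns label y (a proxy for distance to the decision boundary)."""
    noisy = x + sigma * np.random.randn(n, x.shape[-1])
    return float(np.mean(S.predict(noisy) == y))


def infer_membership(S, x, y, threshold=0.9):
    """Phase 2: decide membership using only the surrogate, i.e. with zero
    additional queries to the target model M."""
    return boundary_distance(S, x, y) >= threshold
```

In this sketch the entire query cost is incurred inside `extract_surrogate`, while `infer_membership` can be run for arbitrarily many candidate points at no extra cost to the attacker, mirroring the abstract's claim that the query overhead is shifted to a one-time extraction phase.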
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 21745