MIABench: Full-Pipeline Evaluation of Membership Inference Attacks

ICLR 2026 Conference Submission18489 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Membership Inference Attack; Machine Learning Privacy
Abstract: Membership inference attacks (MIAs) are widely used to assess a model's vulnerability to privacy leakage by determining whether specific data instances were part of its training set. Despite their significance as a privacy metric, existing evaluations of MIAs are often limited to isolated and inconsistent scenarios, hindering comprehensive comparisons and practical insights. To address this limitation, we analyze the full pipeline of training models and conducting MIAs, and present a comprehensive benchmark for evaluating various MIA methods on deep learning models. We establish a reproducible benchmark suite with code and models, leaderboards, detailed insights into the mechanisms of different MIA approaches, and practical guidance for selecting and applying MIAs effectively. This work enhances the understanding and application of MIAs, providing a solid foundation for advancing privacy-preserving machine learning research.
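The abstract's core notion of determining whether a specific instance was in a model's training set can be illustrated with the simplest MIA baseline, a loss-threshold attack: members tend to incur lower loss than non-members. The sketch below is illustrative only and is not taken from the paper; the losses are synthetic and the threshold choice is an assumption, where a real benchmark would compute per-example losses from a trained model.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# All data here is synthetic and illustrative, not from the paper.
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Guess membership: examples whose loss falls below the threshold
    are predicted to be training-set members, since models typically
    fit their training data more closely than unseen data."""
    return losses < threshold

rng = np.random.default_rng(0)
# Synthetic per-example losses: members drawn with a lower mean.
member_losses = rng.gamma(shape=2.0, scale=0.2, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.6, size=1000)

# Illustrative threshold choice (an assumption): the pooled median.
threshold = np.median(np.concatenate([member_losses, nonmember_losses]))
tpr = loss_threshold_mia(member_losses, threshold).mean()     # true-positive rate
fpr = loss_threshold_mia(nonmember_losses, threshold).mean()  # false-positive rate
```

A gap between the attack's true-positive and false-positive rates is the basic signal such benchmarks measure; stronger attacks in the literature refine this with shadow models or per-example calibration.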
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 18489