Combining Machine Learning Defenses without Conflicts

TMLR Paper 5057 Authors

08 Jun 2025 (modified: 09 Jun 2025) · Under review for TMLR · CC BY 4.0
Abstract: Machine learning (ML) models require protection against various risks to security, privacy, and fairness. Real-life ML models need simultaneous protection against multiple risks, which necessitates combining multiple defenses effectively, without incurring a significant drop in the effectiveness of the constituent defenses. We present a systematization of existing work based on how defenses are combined and how they interact. We then identify unexplored combinations and evaluate existing combination techniques to identify their limitations. Using these insights, we present DefCon, a combination technique which is (a) accurate (correctly identifies whether a combination is effective), (b) scalable (allows combining multiple defenses), (c) non-invasive (allows combining existing defenses without modification), and (d) general (is applicable to different types of defenses). We show that DefCon achieves 90% accuracy on eight combinations from prior work, and 86% on 30 unexplored combinations that we evaluate empirically.
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Pin-Yu_Chen1
Submission Number: 5057