Keywords: Robustness, Byzantine Attacks, non-IID data, Distributed Learning
TL;DR: We revisit classical Byzantine attacks and show that, when calibrated to account for data heterogeneity, they severely degrade the performance of state-of-the-art defenses.
Abstract: In this work, we focus on Byzantine-resilient distributed learning. While considerable effort has been devoted to designing robust aggregation rules, the underlying threat model, especially under non-IID data distributions, remains under-explored. This imbalance may create a false sense of security about the effectiveness of current defenses. To address this gap, we revisit and calibrate existing Byzantine attacks to better reflect the challenges of learning on heterogeneous data, enabling more realistic stress testing of defenses. Through systematic evaluation on standard benchmark datasets and under diverse partitioning strategies, we show that data heterogeneity gives adversaries greater leeway for model poisoning. We leverage this insight to critically evaluate existing defenses. Our findings underscore the need to assess robustness not only through defense design, but also through carefully calibrated and realistic threat models.
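To make the idea of "calibrating" a classical attack to data heterogeneity concrete, below is a minimal NumPy sketch of an attack in the "A Little Is Enough" (ALIE) family, where Byzantine clients submit mean(honest updates) minus z times their standard deviation. The heterogeneity-aware choice of z shown here (widening the perturbation with the observed dispersion of honest updates) is a hypothetical illustration of the general intuition that non-IID data lets attacks hide inside benign disagreement; it is not the paper's actual calibration procedure, and the function names are made up for this sketch.

```python
# Minimal sketch: an ALIE-style Byzantine attack with a hypothetical
# heterogeneity-dependent perturbation scale. Not the paper's method.
import numpy as np


def alie_attack(honest_updates: np.ndarray, z: float) -> np.ndarray:
    """Craft a malicious update that stays close to the honest distribution.

    honest_updates: array of shape (num_honest_clients, dim)
    z: perturbation scale; larger z pushes the update further from the mean.
    """
    mu = honest_updates.mean(axis=0)
    sigma = honest_updates.std(axis=0)
    return mu - z * sigma


def heterogeneity_calibrated_z(honest_updates: np.ndarray, base_z: float = 1.0) -> float:
    """Hypothetical calibration: scale the attack with the spread of honest updates.

    Under non-IID data, honest updates disagree more, so a larger perturbation
    can still blend in with benign heterogeneity.
    """
    center = honest_updates.mean(axis=0, keepdims=True)
    dispersion = np.linalg.norm(honest_updates - center, axis=1).mean()
    return base_z * (1.0 + dispersion)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulate non-IID honest clients: each client's gradient has a different bias.
    client_bias = rng.normal(0.0, 2.0, size=(10, 1))
    honest = rng.normal(loc=client_bias, scale=0.5, size=(10, 32))
    z = heterogeneity_calibrated_z(honest)
    malicious = alie_attack(honest, z)
    print("calibrated z:", round(z, 3), "| attack norm:", round(float(np.linalg.norm(malicious)), 3))
```

The design choice illustrated is simply that the attack magnitude is tied to how spread out the honest updates already are; any defense evaluated only against a fixed, heterogeneity-agnostic z may therefore look more robust than it is.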
Submission Number: 194