AFRC: Adaptive Responsible Compression for Federated Learning under Data Heterogeneity

Published: 19 Dec 2025, Last Modified: 05 Jan 2026
Venue: AAMAS 2026 Extended Abstract
License: CC BY 4.0
Keywords: Federated multiagent learning, Coordination mechanisms, Fairness, Differential privacy, Adaptive control
TL;DR: AFRC is an adaptive compression framework for federated learning that balances accuracy, fairness, privacy, and communication under non-IID data, outperforming strong baselines on image and language tasks.
Abstract: We present Adaptive Responsible Compression for Federated Learning under Data Heterogeneity (AFRC), an adaptive mechanism that operationalises responsible multiagent learning by jointly regulating model compression, cross-agent equity, and differential privacy. AFRC introduces two feedback controllers: a proportional-integral fairness controller that dynamically adjusts per-round fairness pressure to drive equitable agent outcomes (low inter-agent accuracy variance and higher minimum accuracy), and a budget-aware privacy controller that schedules the DP noise multiplier to exactly meet a global $(\varepsilon,\delta)$ target while preserving late-stage utility. Across $100$ agents on CIFAR-10 and Shakespeare with severe non-IID partitions, AFRC achieves 5-10\% higher average accuracy than strong baselines while reducing inter-agent accuracy variance by >30\%, at 80\% sparsity and a fixed $\varepsilon=5$ budget. We analyse the system's feedback dynamics and provide convergence guarantees for a simplified variant. AFRC demonstrates that adaptive, mechanism-based coordination is essential to balance utility, equity, privacy, and efficiency in decentralised multiagent learning.
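The abstract's two feedback mechanisms can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's implementation: the PI gains, variance target, clipping bounds, and the simplified noise schedule (even division of the remaining $\varepsilon$ budget, mapped to a noise multiplier as $\sigma \propto 1/\varepsilon_{\text{round}}$) are all assumptions standing in for AFRC's accountant-based scheduler.

```python
import numpy as np


class PIFairnessController:
    """Proportional-integral fairness controller (sketch): raises the
    per-round fairness pressure when inter-agent accuracy variance exceeds
    a target, and relaxes it otherwise. Gains and bounds are illustrative."""

    def __init__(self, target_var=0.01, kp=2.0, ki=0.5, lam_max=5.0):
        self.target_var = target_var  # desired inter-agent accuracy variance
        self.kp, self.ki = kp, ki     # proportional and integral gains
        self.lam_max = lam_max        # cap on fairness pressure
        self.integral = 0.0           # accumulated variance error

    def step(self, agent_accs):
        # Positive error means outcomes are more unequal than the target.
        err = float(np.var(agent_accs)) - self.target_var
        self.integral += err
        lam = self.kp * err + self.ki * self.integral
        # Clip to [0, lam_max]: fairness pressure for the next round.
        return float(np.clip(lam, 0.0, self.lam_max))


def schedule_noise(eps_total, eps_spent, rounds_left, sigma_min=0.5):
    """Budget-aware DP noise schedule (simplified): split the remaining
    epsilon evenly over the remaining rounds and map it to a noise
    multiplier via sigma ~ 1/eps_round, floored at sigma_min. With a
    fixed budget, fewer remaining rounds means less noise, which is how
    late-stage utility is preserved."""
    eps_round = max(eps_total - eps_spent, 1e-9) / max(rounds_left, 1)
    return max(1.0 / eps_round, sigma_min)
```

For example, a round with very uneven agent accuracies (e.g. `[0.9, 0.5, 0.3]`) yields a positive fairness pressure, while a perfectly equal round yields zero; and with a fixed $\varepsilon=5$ budget, the scheduled noise multiplier shrinks as fewer rounds remain.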
Area: Learning and Adaptation (LEARN)
Generative AI: I acknowledge that I have read and will follow this policy.
Submission Number: 1802