LoRA-FL: A Low-Rank Adversarial Attack for Compromising Group Fairness in Federated Learning

Published: 10 Jun 2025, Last Modified: 29 Jun 2025 · CFAgentic @ ICML'25 Poster · CC BY 4.0
Keywords: Federated Learning, Group Fairness, Adversarial Attacks
TL;DR: We introduce LoRA-FL, a stealthy fairness attack in Federated Learning that uses low-rank adapters to bias models without sacrificing accuracy or triggering robust defenses like KRUM.
Abstract: Federated Learning (FL) enables collaborative model training without sharing raw data, but heterogeneous agent data distributions can induce unfair outcomes across sensitive groups. Existing fairness attacks often degrade accuracy or are blocked by robust aggregators like $\texttt{KRUM}$. We propose $\texttt{LoRA-FL}$: a stealthy adversarial attack that uses low-rank adapters to inject bias while closely mimicking benign updates. By operating in a compact parameter subspace, $\texttt{LoRA-FL}$ evades standard defenses without harming accuracy. On standard fairness benchmarks (Adult, Bank, Dutch), $\texttt{LoRA-FL}$ degrades fairness metrics (demographic parity, equalized odds) by over 40\% with only 10–20\% adversarial agents, revealing a critical vulnerability in FL's fairness-security landscape.
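The core idea of a low-rank adapter update can be sketched as follows. This is a hypothetical illustration of the mechanism named in the abstract, not the paper's actual attack: the adapter factors `A` and `B`, their scales, and the layer sizes are all assumptions for demonstration.

```python
import numpy as np

# Hypothetical sketch of the low-rank update idea behind LoRA-FL:
# an adversarial agent parameterizes its weight perturbation as a
# product of two small matrices, delta_W = B @ A, so the perturbation
# lives in a rank-r subspace with far fewer free parameters.

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 32, 4               # full layer shape vs. adapter rank
W = rng.standard_normal((d_out, d_in))   # benign global weight matrix

# Low-rank adapter factors (what the attacker would actually optimize;
# here they are just small random matrices for illustration).
A = rng.standard_normal((r, d_in)) * 0.01
B = rng.standard_normal((d_out, r)) * 0.01

delta_W = B @ A                # rank <= r perturbation
W_malicious = W + delta_W      # update the agent would submit

# The perturbation is tiny relative to the benign weights, which is the
# intuition for why distance-based aggregators (e.g. KRUM) may not flag it.
rel_norm = np.linalg.norm(delta_W) / np.linalg.norm(W)
print(np.linalg.matrix_rank(delta_W), rel_norm)
```

Note the parameter savings: the adapter has `r * (d_in + d_out)` trainable entries versus `d_in * d_out` for a full-rank update, which is what the abstract means by operating in a compact parameter subspace.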
Submission Number: 7