On the Fragility of Contribution Score Computation in Federated Learning

Published: 26 Oct 2025 | Last Modified: 26 Mar 2026 | IEEE SaTML 2026 | Readers: Everyone | License: arXiv.org perpetual, non-exclusive
Abstract: This paper investigates the fragility of contribution evaluation in federated learning, a critical mechanism for ensuring fairness and incentivizing participation. We argue that contribution scores are susceptible to significant distortion from two fundamental sources: architectural sensitivity and intentional manipulation. First, we explore how different model aggregation methods affect these scores. While most prior work assumes a basic averaging approach, we demonstrate that advanced techniques, including those designed to handle unreliable or heterogeneous clients, can unintentionally yet significantly alter the final scores. Second, we examine the threat posed by poisoning attacks, in which malicious participants strategically manipulate their model updates to inflate their own contribution scores or to deflate those of others. Through extensive experiments across diverse datasets and model architectures, implemented within the Flower framework, we rigorously show that both the choice of aggregation method and the presence of attackers can substantially skew contribution scores, highlighting the need for more robust contribution evaluation schemes.
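To make the first claim concrete, the following is a minimal, self-contained sketch (not the paper's actual evaluation pipeline) of how the aggregation rule alone can reshape contribution scores. It compares leave-one-out contribution scores under plain averaging versus a coordinate-wise trimmed mean, a common robust aggregator. The utility function, the toy client updates, and the reference model `target` are all hypothetical illustrations.

```python
import numpy as np

def fed_avg(updates):
    # Plain coordinate-wise mean of client updates.
    return np.mean(updates, axis=0)

def trimmed_mean(updates, trim=1):
    # Robust aggregation: per coordinate, drop the `trim` smallest
    # and largest values before averaging.
    s = np.sort(updates, axis=0)
    return np.mean(s[trim:len(updates) - trim], axis=0)

def leave_one_out_scores(updates, aggregate, target):
    # Contribution of client i = utility of the full aggregate minus
    # utility of the aggregate computed without client i. Utility is
    # (hypothetically) the negative distance to a reference model.
    def utility(subset):
        return -np.linalg.norm(aggregate(subset) - target)
    full = utility(updates)
    return [full - utility(np.delete(updates, i, axis=0))
            for i in range(len(updates))]

# Toy round: four well-behaved clients and one outlier (client 4).
updates = np.array([
    [1.0, 1.0],
    [1.1, 0.9],
    [0.9, 1.1],
    [1.0, 1.2],
    [5.0, -4.0],  # outlier / potential attacker
])
target = np.array([1.0, 1.0])  # hypothetical "ideal" model

scores_avg = leave_one_out_scores(updates, fed_avg, target)
scores_tm = leave_one_out_scores(
    updates, lambda u: trimmed_mean(u, trim=1), target)
```

Under plain averaging the outlier receives a strongly negative score, since removing it sharply improves the aggregate; under the trimmed mean the same client's update is largely trimmed away, so its leave-one-out score is near zero. The scoring rule is identical in both cases: only the aggregator changed.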