CoAst: Validation-Free Contribution Assessment for Federated Learning based on Cross-Round Valuation

Published: 20 Jul 2024 · Last Modified: 21 Jul 2024 · MM 2024 Poster · CC BY 4.0
Abstract: In the federated learning (FL) process, since the data held by each participant differs, it is necessary to determine which participants contribute more to the model's performance. Effective contribution assessment can help motivate data owners to participate in FL training. Research in this field can be divided into two directions based on whether a validation dataset is required. Validation-based methods need representative validation data to measure model accuracy, which is difficult to obtain in practical FL scenarios. Existing validation-free methods assess contribution based on the parameters and gradients of the local models and the global model in a single training round, which makes them susceptible to the stochasticity of DL training. In this work, we propose CoAst, a practical method to assess FL participants' contributions without access to any validation data. The core idea of CoAst involves two aspects: first, only the most important model parameters are counted, selected via weight quantization; second, a cross-round valuation is performed based on the similarity between the current local parameter updates and the global parameter updates in several subsequent communication rounds. Extensive experiments show that the assessment reliability of CoAst is comparable to that of existing validation-based methods and outperforms existing validation-free methods. We believe that CoAst will inspire the community to study a new FL paradigm with an inherent contribution assessment.
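The abstract does not give CoAst's exact formulation, so the following is a minimal sketch of the two ingredients it names, under assumed details: top-k magnitude selection stands in for the weight-quantization step that keeps only the most important parameters, and cosine similarity against a window of subsequent global updates stands in for the cross-round valuation. The function names (`topk_mask`, `cross_round_score`) and parameters (`k_ratio`, the window of global updates) are illustrative, not the paper's definitions.

```python
import torch


def topk_mask(update: torch.Tensor, k_ratio: float = 0.1) -> torch.Tensor:
    """Keep only the largest-magnitude entries of a parameter update.

    Assumed stand-in for CoAst's weight quantization: select the
    top k_ratio fraction of entries by absolute value.
    """
    flat = update.flatten()
    k = max(1, int(k_ratio * flat.numel()))
    threshold = flat.abs().topk(k).values.min()
    return (update.abs() >= threshold).float()


def cross_round_score(
    local_update: torch.Tensor,
    global_updates: list[torch.Tensor],
    k_ratio: float = 0.1,
) -> float:
    """Score one participant's round-t update against the global
    updates of rounds t+1, ..., t+w (hypothetical formulation).

    local_update: the participant's parameter update at round t.
    global_updates: global parameter updates from the next w rounds.
    """
    mask = topk_mask(local_update, k_ratio)
    masked_local = (local_update * mask).flatten()
    score = 0.0
    for g in global_updates:
        masked_global = (g * mask).flatten()
        # Higher similarity to later global updates suggests the
        # participant's update pushed the model in a lasting direction.
        score += torch.nn.functional.cosine_similarity(
            masked_local, masked_global, dim=0
        ).item()
    return score / len(global_updates)
```

In this reading, a participant whose round-t update keeps aligning with where the global model actually moves over the next few rounds earns a higher contribution score, without any validation data being consulted.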
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: The remarkable capabilities demonstrated by large models have drawn enormous attention to the value of collaboratively training on large amounts of multimodal data. Federated learning (FL) is an important paradigm for training DL models on data owned by different parties. However, due to disparities in the data held by different participants, each participant's contribution to the FL model's performance varies considerably, which can lead to unfairness in reward distribution. Accurately evaluating each participant's contribution helps improve data quality and creates incentives for data sharing. This work proposes an effective contribution assessment, evaluated on various computer-vision datasets.
Submission Number: 677