Efficient and Secure Contribution Estimation in Vertical Federated Learning

Published: 01 Jan 2024 · Last Modified: 06 Feb 2025 · CIKM 2024 · CC BY-SA 4.0
Abstract: In Vertical Federated Learning (VFL), rewards should be determined in advance, since they are the information participants need to decide whether cooperation can be reached. Determining reasonable rewards requires estimating each participant's contribution precisely. We propose a Vertically Federated Contribution Estimation (VF-CE) method. VF-CE computes the Mutual Information (MI) between the distributed features and the label using a neural network trained via VFL itself. Note that the compensation for contribution estimation is low, as it only covers computation costs, whereas the reward for the real VFL training is high, as it must cover training costs as well as participants' contributions to model performance and the resulting business benefits. Because MI correlates strongly and positively with final model performance, contributions to model performance can be estimated from contributions to MI. We integrate a scalar-level attention mechanism into the MI network and treat the participants' attention weights as their contributions. We find that the attention weights effectively capture contribution redundancy: their Spearman correlation coefficient with the Shapley value is as high as 0.963. We also show that VF-CE satisfies the balance, zero-element, and symmetry properties concerning fairness, which are hallmark properties of the Shapley value. Compared with existing work, VF-CE accounts for contribution redundancy precisely, efficiently outputs approximated Shapley values through a single MI calculation instead of 2^n (where n is the number of participants), and introduces no privacy risk beyond that inherent in VFL itself, i.e., gradient transmission.
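
The page does not include code, but the mechanism the abstract describes can be illustrated with a short sketch. The following is a hypothetical, centralized toy reconstruction, not the authors' implementation: names such as VFCESketch and att_logits are our own. It pairs a MINE-style statistics network, which estimates a Donsker-Varadhan lower bound on the MI between the pooled participant embeddings and the label, with softmaxed scalar attention logits that weight each participant's embedding; after training, those weights are read off as contribution scores. A real VF-CE run would keep each bottom model at its own party and exchange only embeddings and gradients.

```python
import math
import torch
import torch.nn as nn

class VFCESketch(nn.Module):
    """Toy MI estimator with scalar-level attention over participants.

    Hypothetical sketch: a statistics network T gives a Donsker-Varadhan
    lower bound on I(features; label); softmaxed scalar logits weight each
    participant's embedding and double as contribution scores.
    """
    def __init__(self, feature_dims, embed_dim=16, n_classes=2):
        super().__init__()
        # One "bottom model" per participant, as in standard VFL.
        self.bottoms = nn.ModuleList(
            nn.Sequential(nn.Linear(d, embed_dim), nn.ReLU())
            for d in feature_dims
        )
        # One learnable scalar attention logit per participant.
        self.att_logits = nn.Parameter(torch.zeros(len(feature_dims)))
        # Statistics network T(x, y) for the MI lower bound.
        self.T = nn.Sequential(
            nn.Linear(embed_dim + n_classes, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, parts, y_onehot):
        att = torch.softmax(self.att_logits, dim=0)
        # Attention-weighted sum of the participants' embeddings.
        h = sum(w * bottom(x) for w, bottom, x in zip(att, self.bottoms, parts))
        joint = self.T(torch.cat([h, y_onehot], dim=1)).squeeze(1)
        # Shuffling the labels breaks the dependence, giving marginal samples.
        y_shuf = y_onehot[torch.randperm(y_onehot.size(0))]
        marg = self.T(torch.cat([h, y_shuf], dim=1)).squeeze(1)
        # Donsker-Varadhan bound: E_joint[T] - log E_marg[exp(T)].
        mi = joint.mean() - (torch.logsumexp(marg, 0) - math.log(marg.numel()))
        return mi, att
```

A toy run with two simulated parties (here holding 3 and 5 features of the same samples) maximizes the MI bound and then reads off the attention weights:

```python
torch.manual_seed(0)
parts = [torch.randn(256, 3), torch.randn(256, 5)]
y = nn.functional.one_hot(torch.randint(0, 2, (256,)), 2).float()
model = VFCESketch([3, 5])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(300):
    mi, att = model(parts, y)
    opt.zero_grad()
    (-mi).backward()   # maximize the MI lower bound
    opt.step()
print(f"MI estimate: {mi.item():.3f}, contribution weights: {att.tolist()}")
```

Because the attention weights are produced by one jointly trained model, a single MI calculation suffices, which is the source of the claimed speedup over the 2^n coalition evaluations required for exact Shapley values.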