Abstract: Vertical federated learning (VFL) allows multiple clients with misaligned feature spaces to collaboratively train a global model. Applying VFL to high-stakes decision scenarios makes model interpretation essential for decision reliability and diagnosis. However, the feature discrepancy in VFL raises new issues for model interpretation in the distributed setting: one arises from the local-global perspective, where the local importance of a feature is not equal to its global importance; the other arises from the local-local perspective, where information asymmetry among clients makes it difficult to identify overlapped features. In this work, we propose a new distributed Model Interpretation method for Vertical Federated Learning with feature discrepancy, namely MI-VFL. In particular, to deal with the local-global discrepancy, MI-VFL leverages the law of total probability to adjust the local importance of features and ensures the completeness of the selected features through an adversarial game. To handle the local-local discrepancy, MI-VFL builds a federated adversarial learning model that identifies the overlapped features in a single pass, rather than performing client-to-client intersections multiple times. We extensively evaluate MI-VFL on six synthetic datasets and five real-world datasets. The results show that MI-VFL accurately identifies important features, suppresses overlapped features, and thus improves model performance.