Explainable vertical federated learning for healthcare: Ensuring privacy and optimal accuracy

Published: 18 Dec 2024 · Last Modified: 26 Jan 2026 · IEEE Big Data 2024 · CC BY 4.0
Abstract: Vertical Federated Learning (VFL) provides a secure, collaborative machine learning framework that allows multiple institutions, each holding different subsets of features for the same individuals, to jointly train models without sharing sensitive data. Despite its advantages, VFL requires a careful balance among competing requirements such as explainability, privacy, and data security. Enhancing model interpretability often necessitates revealing more about the underlying data, which can compromise privacy. Conversely, strong privacy safeguards may obscure the model’s decision-making process, hindering explainability. In this paper, we explore this critical explainability-privacy trade-off and propose a novel framework designed to navigate this balance while ensuring robust utility and accuracy. We demonstrate the effectiveness of our framework on a real-world healthcare dataset, focusing on scenarios where both interpretability and privacy are paramount. Numerical experiments show that our approach maintains model accuracy while providing interpretable insights into predictions, all while preserving privacy at critical junctures. This work underscores the importance of addressing the explainability-privacy dichotomy in federated learning systems, offering a path toward building transparent, trustworthy AI models in sensitive domains such as healthcare.
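To make the VFL setting concrete, the following is a minimal illustrative sketch (not the paper's actual framework) of vertically partitioned logistic regression: two hypothetical parties hold disjoint feature columns for the same patients, each computes a partial logit locally, and only aggregated logits and logit-gradients cross the boundary, never raw features or local weights. All names and parameters are assumptions for illustration.

```python
import numpy as np

# Illustrative-only setup: two parties share patients but hold disjoint features.
rng = np.random.default_rng(0)
n = 200
X_a = rng.normal(size=(n, 3))   # party A's features (e.g., lab values)
X_b = rng.normal(size=(n, 2))   # party B's features (e.g., vital signs)
true_w = rng.normal(size=5)
y = (np.concatenate([X_a, X_b], axis=1) @ true_w > 0).astype(float)

w_a = np.zeros(3)               # party A's local model; never leaves party A
w_b = np.zeros(2)               # party B's local model; never leaves party B
lr = 0.5

for _ in range(300):
    # Each party computes its partial logit locally; a coordinator sums them.
    z = X_a @ w_a + X_b @ w_b
    p = 1.0 / (1.0 + np.exp(-z))         # sigmoid of the aggregated logit
    g = (p - y) / n                      # gradient w.r.t. the logit, sent back
    # Each party updates its own weights using only the shared logit-gradient.
    w_a -= lr * X_a.T @ g
    w_b -= lr * X_b.T @ g

preds = (1.0 / (1.0 + np.exp(-(X_a @ w_a + X_b @ w_b))) > 0.5).astype(float)
accuracy = (preds == y).mean()
```

In this toy protocol, the only values exchanged are the per-sample partial logits and the per-sample logit-gradients; the trade-off the paper studies arises because even these intermediate quantities can leak information about the underlying features, which is why practical VFL systems add protections such as secure aggregation or differential privacy on top of this basic exchange.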