Empowering Federated Graph Rationale Learning with Latent Environments

Published: 29 Jan 2025 · Last Modified: 29 Jan 2025 · WWW 2025 Poster · CC BY 4.0
Track: Graph algorithms and modeling for the Web
Keywords: Federated Learning, Graph Rationalization, Explainability
Abstract: The success of Graph Neural Networks (GNNs) in graph classification has heightened interest in explainable GNNs, particularly through graph rationalization, which aims to improve GNN explainability by identifying the subgraph structures (i.e., rationales) that support model predictions. Existing methods, however, rely on centralized datasets, which is problematic in privacy-sensitive scenarios such as molecular property prediction. Federated Learning (FL) offers a remedy by enabling collaborative model training without sharing raw data, making Federated Graph Rationalization a promising research direction. Yet within each client, rationalization methods often exploit client-specific shortcuts to compose rationales and make task predictions, and data heterogeneity (non-IID data across clients) exacerbates this problem, leading to poor prediction performance. To address these challenges, we propose the Environment-aware Data Augmentation (EaDA) method for Federated Graph Rationalization. EaDA comprises two main components: an Environment-aware Rationale Extraction (ERE) module and a Local-Global Alignment (LGA) module. The ERE module employs prototype learning to infer abstract environment information on each client, which is shared and aggregated into a global environment; this global environment is then used to generate counterfactual samples for local clients, improving the robustness of task predictions. The LGA module uses contrastive learning to align local and global rationale representations, mitigating the performance degradation caused by data heterogeneity. Comprehensive experiments on benchmark datasets demonstrate the effectiveness of our approach.
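The abstract describes two representation-level mechanisms: prototype-based environment sharing with counterfactual augmentation (ERE) and contrastive local-global alignment (LGA). Below is a minimal PyTorch sketch of how these pieces might fit together. Since the paper body and code are not part of this page, everything here is an illustrative assumption: the names `EnvironmentPrototypes` and `local_global_alignment`, the soft prototype assignment, the additive counterfactual fusion, and the InfoNCE-style loss are all stand-ins, not the authors' implementation.

```python
# Hypothetical sketch of EaDA's two components at the embedding level.
# Shapes: B = batch of graphs, D = embedding dim, K = number of prototypes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EnvironmentPrototypes(nn.Module):
    """ERE sketch: K learnable prototypes summarize abstract environment
    (non-rationale) information that a client can share instead of raw graphs."""

    def __init__(self, num_prototypes: int, dim: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))

    def assign(self, env_repr: torch.Tensor) -> torch.Tensor:
        # Soft-assign each local environment embedding to the prototypes,
        # returning an abstracted environment representation.
        weights = (env_repr @ self.prototypes.t()).softmax(dim=-1)  # (B, K)
        return weights @ self.prototypes                            # (B, D)

    def counterfactual(self, rationale_repr, global_protos):
        # Pair each local rationale with a randomly drawn *global* environment
        # prototype to form a counterfactual sample (additive fusion assumed).
        idx = torch.randint(0, global_protos.size(0), (rationale_repr.size(0),))
        return rationale_repr + global_protos[idx]


def local_global_alignment(local_r, global_r, temperature: float = 0.2):
    """LGA sketch: InfoNCE-style contrastive loss pulling each client's
    rationale representation toward its aggregated global counterpart."""
    local_r = F.normalize(local_r, dim=-1)
    global_r = F.normalize(global_r, dim=-1)
    logits = local_r @ global_r.t() / temperature   # (B, B) pairwise similarities
    targets = torch.arange(local_r.size(0))         # positives on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    B, D, K = 8, 64, 4
    ere = EnvironmentPrototypes(K, D)
    rationale = torch.randn(B, D)    # stand-in for a GNN's rationale embedding
    environment = torch.randn(B, D)  # stand-in for the non-rationale remainder

    abstract_env = ere.assign(environment)
    # In a real FL round the server would aggregate prototypes across clients
    # (e.g., via FedAvg); local prototypes serve as a placeholder here.
    global_env = ere.prototypes.detach()
    cf = ere.counterfactual(rationale, global_env)

    # Placeholder "global" rationale view for the alignment loss.
    global_view = rationale + 0.05 * torch.randn(B, D)
    loss = local_global_alignment(rationale, global_view)
    print(cf.shape, loss.item())
```

In this reading, the prototype tensor would be the only environment information a client uploads, which is what lets the method share "abstract" environments without exposing raw molecular graphs.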
Submission Number: 856