Keywords: Industrial Internet of Things, Federated Graph Learning, Graph Neural Networks, Attention Mechanism, Clustering Algorithm
TL;DR: ${FGL}_{AC}$ is a federated graph learning framework using clustering and attention. It preprocesses client data via clustering and optimizes parameter aggregation with attention mechanisms to enhance model performance.
Abstract: With the growth of the Industrial Internet of Things, graph data is increasing rapidly, but this data is held by different clients and, for reasons of privacy and data security, cannot be pooled for unified model training. Federated graph learning overcomes this difficulty well: it allows clients to participate in training a global model without revealing their private data, thereby protecting data security. However, two problems remain pressing: how to use the parameters uploaded by clients more efficiently to improve training, and how to process the large volumes of raw data that clients hold. This paper proposes a federated graph learning framework with an attention mechanism and a clustering algorithm (${FGL}_{AC}$). First, before a client participates in training, a clustering algorithm preprocesses its large local dataset to reduce the overall training burden and improve training accuracy. Then, when the server aggregates model parameters, the adaptive capacity of the attention mechanism assigns different weights to the parameters uploaded by different clients, yielding better weighting and improving the global model. To further verify the effectiveness of ${FGL}_{AC}$, experiments were conducted on several datasets. The results show that, in most cases, ${FGL}_{AC}$ achieves an improvement of 2.63\%--4.03\% over other federated graph learning frameworks.
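The attention-weighted aggregation step described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual method: the `attention_aggregate` function, the choice of negative Euclidean distance as the attention score, and the softmax weighting are all assumptions for illustration; ${FGL}_{AC}$'s real scoring function may differ.

```python
import numpy as np

def attention_aggregate(server_params, client_params_list):
    """Hypothetical sketch of attention-based aggregation: score each
    client's uploaded parameters by their (negative) distance to the
    current server parameters, softmax the scores into weights, and
    return the weighted average. Clients whose updates lie closer to
    the server model receive larger weights."""
    scores = np.array([
        -np.linalg.norm(server_params - cp) for cp in client_params_list
    ])
    # Numerically stable softmax over the attention scores.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted sum of client parameter vectors.
    return sum(w * cp for w, cp in zip(weights, client_params_list))

# Toy example: two clients close to the server model, one outlier.
server = np.zeros(4)
clients = [
    np.array([1.0, 0.0, 0.0, 0.0]),
    np.array([0.5, 0.5, 0.0, 0.0]),
    np.array([5.0, 5.0, 5.0, 5.0]),  # outlier update
]
agg = attention_aggregate(server, clients)
```

In this toy run, the outlier client receives a near-zero weight, so the aggregated parameters stay close to the two consistent clients; a plain FedAvg-style mean would instead be pulled strongly toward the outlier.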
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10377