Abstract: Graph Neural Networks (GNNs) have demonstrated superior performance in a variety of graph mining and learning tasks. However, when node representations involve sensitive personal information or variables related to individuals, learning from graph data can raise significant privacy concerns. Although recent studies have explored local differential privacy (LDP) to address these concerns, they often introduce significant distortions to graph data, severely degrading private learning utility (e.g., node classification accuracy). In this paper, we present UPGNET, an LDP-based privacy-preserving graph learning framework that enhances utility while protecting user data privacy. Specifically, we propose a three-stage pipeline that generalizes LDP protocols for node features, targeting privacy-sensitive scenarios. Our analysis identifies two key factors that affect the utility of privacy-preserving graph learning: *feature dimension* and *neighborhood size*. Building on this analysis, UPGNET enhances utility by introducing two core layers: the High-Order Aggregator (HOA) layer and the Node Feature Regularization (NFR) layer. Extensive experiments on real-world datasets indicate that UPGNET significantly outperforms existing methods in terms of both privacy protection and learning utility.
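To make the two utility factors concrete, here is a minimal NumPy sketch. It is not the paper's actual protocol or UPGNET code; the function names, the Laplace mechanism, and the even budget split across dimensions are illustrative assumptions. It shows why a larger feature dimension hurts utility (the per-dimension privacy budget shrinks, so noise grows) while a larger neighborhood helps (averaging independently perturbed vectors cancels noise).

```python
import numpy as np

def perturb_features(x, epsilon, lo=0.0, hi=1.0):
    """Clip each feature to [lo, hi], then add per-dimension Laplace noise.

    Hypothetical scheme: the budget epsilon is split evenly across the
    d dimensions, so the noise scale grows linearly with d.
    """
    d = len(x)
    scale = (hi - lo) / (epsilon / d)   # sensitivity / per-dimension budget
    return np.clip(x, lo, hi) + np.random.laplace(scale=scale, size=d)

def mean_aggregate(features, neighborhoods):
    """Average perturbed features over each node's neighborhood.

    Averaging n independently perturbed vectors cuts the noise variance
    by roughly a factor of n, so larger neighborhoods recover utility.
    """
    return np.stack([features[list(nbrs)].mean(axis=0)
                     for nbrs in neighborhoods])

np.random.seed(0)
X = np.random.rand(100, 8)                          # 100 nodes, 8 features
Xp = np.stack([perturb_features(x, epsilon=4.0) for x in X])
neighborhoods = [range(100)] * 100                  # toy fully connected graph
Xagg = mean_aggregate(Xp, neighborhoods)
# Aggregated features sit far closer to the clean ones than raw perturbed ones.
print(np.abs(Xp - X).mean(), np.abs(Xagg - X).mean())
```

In this toy setup the average error of the aggregated features is much smaller than that of the raw perturbed features, which is the intuition behind exploiting high-order (multi-hop) neighborhoods when direct neighborhoods are small.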
Lay Summary: Graph-based machine learning models are increasingly used to analyze social networks, biological systems, and other connected data. But in doing so, they often process sensitive personal information, raising serious privacy concerns. Existing solutions that try to protect user privacy by adding noise to the data often make these models much less accurate. Our research addresses this trade-off between privacy and performance. We developed a new framework called UPGNET that protects users' data while keeping the model effective. It works by identifying two key factors, the dimensionality of each node's features and the size of its local neighborhood, and improves graph learning utility by addressing both. As a result, UPGNET significantly improves the accuracy of graph learning while preserving strong privacy guarantees. This makes it a promising step toward safer, more trustworthy AI systems that can learn from sensitive data without exposing it.
Primary Area: Social Aspects->Privacy
Keywords: Differential Privacy, Graph Neural Networks, Privacy-preserving
Submission Number: 3814