Target Before You Perturb: Enhancing Locally Private Graph Learning via Task-Oriented Perturbation

ICLR 2026 Conference Submission 11799 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: graph learning, privacy-preserving, task-oriented perturbation
Abstract: Graph neural networks (GNNs) have achieved remarkable success in graph representation learning and have been widely adopted across various domains. However, real-world graphs often contain sensitive personal information, such as user profiles in social networks, which raises serious privacy concerns when GNNs are applied to such data. Consequently, locally private graph learning has gained considerable attention. This framework leverages local differential privacy (LDP) to provide strong privacy guarantees for users' local data. Despite its promise, a key challenge remains: how to preserve high utility for downstream tasks (e.g., node classification accuracy) while ensuring rigorous privacy protection. In this paper, we propose TOGL, a Task-Oriented Graph Learning framework that enhances utility under LDP constraints. Unlike prior approaches that perturb all attributes indiscriminately, TOGL targets task-relevant attributes before applying perturbation, enabling more informed and effective privacy mechanisms. It unfolds in three phases: locally private feature perturbation, task-relevant attribute analysis, and task-oriented private learning. This structured process enables TOGL to provide strict privacy protection while significantly improving the utility of graph learning. Extensive experiments on real-world datasets demonstrate that TOGL substantially outperforms existing methods in balancing privacy preservation and learning effectiveness.
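
The abstract does not describe the perturbation mechanism itself, so the following minimal Python sketch is illustrative only: it assumes per-attribute Laplace noise on features clipped to [0, 1], with the total privacy budget split across attributes in proportion to hypothetical task-relevance scores. The function name task_oriented_perturb, the Laplace mechanism, and the budget-allocation rule are assumptions for illustration, not TOGL's actual algorithm.

import numpy as np

def task_oriented_perturb(x, relevance, epsilon_total, rng=None):
    """Perturb a feature vector under epsilon-LDP, spending more of the
    budget on attributes with higher task-relevance scores.

    x             : 1-D feature vector, assumed pre-clipped to [0, 1]
    relevance     : non-negative task-relevance scores, same length as x
    epsilon_total : total per-user privacy budget (split by sequential composition)
    """
    rng = np.random.default_rng() if rng is None else rng
    weights = relevance / relevance.sum()                 # budget share per attribute
    eps_per_attr = np.maximum(weights * epsilon_total, 1e-6)
    # Laplace mechanism per attribute: sensitivity 1 after clipping to [0, 1].
    noise = rng.laplace(loc=0.0, scale=1.0 / eps_per_attr, size=x.shape)
    return x + noise

# Example: the last two attributes are judged more task-relevant,
# so they receive a larger budget share and hence less noise.
x = np.array([0.2, 0.9, 0.4, 0.7])
relevance = np.array([0.5, 0.5, 2.0, 2.0])
x_private = task_oriented_perturb(x, relevance, epsilon_total=4.0)
print(x_private)

Under the stated clipping assumption, the per-attribute budgets sum to epsilon_total, so by sequential composition the released vector satisfies epsilon_total-LDP; attributes deemed more task-relevant simply receive less noise.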
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 11799