Keywords: graph representation learning, privacy, expressivity
TL;DR: We study the trade-off between provably expressive and private representations in graph learning
Abstract: We investigate the trade-off between expressive power and privacy guarantees in graph representation learning. Growing regulatory demands for privacy-preserving machine learning pose a fundamental challenge: safeguarding sensitive data while maintaining expressive power. To address this challenge, we propose homomorphism density vectors as graph embeddings that are both private and expressive.
Homomorphism densities are provably highly discriminative and offer a powerful tool for distinguishing non-isomorphic graphs.
By adding noise calibrated to each density's sensitivity, we ensure that the resulting embeddings satisfy formal differential privacy guarantees. Our construction preserves expressivity in expectation: each private embedding is an unbiased estimate of the true homomorphism densities. As a result, our embeddings match, in expectation, the expressive power of a broad range of graph neural networks (GNNs), including message-passing and subgraph GNNs, while providing formal privacy guarantees.
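The mechanism sketched in the abstract — computing homomorphism densities for a fixed set of pattern graphs and perturbing each coordinate with Laplace noise scaled to its sensitivity — can be illustrated with a minimal example. This is a sketch under assumptions, not the paper's construction: the pattern graphs, the per-coordinate sensitivity values, and all function names below are hypothetical placeholders; the actual sensitivity analysis is derived in the paper and not reproduced here.

```python
import itertools

import numpy as np


def hom_count(pattern_edges, k, adj, n):
    """Brute-force count of homomorphisms from a k-node pattern into an n-node graph.

    pattern_edges: list of (u, v) pairs over pattern vertices 0..k-1.
    adj: list of neighbor sets, adj[i] = neighbors of node i in the target graph.
    """
    total = 0
    for phi in itertools.product(range(n), repeat=k):
        # phi is a map from pattern vertices to graph vertices;
        # it is a homomorphism if every pattern edge lands on a graph edge.
        if all(phi[v] in adj[phi[u]] for u, v in pattern_edges):
            total += 1
    return total


def hom_density(pattern_edges, k, adj, n):
    """Homomorphism density t(F, G) = hom(F, G) / n^k."""
    return hom_count(pattern_edges, k, adj, n) / n**k


def private_hom_embedding(patterns, adj, n, epsilon, sensitivities, seed=0):
    """Laplace-perturbed homomorphism density vector (illustrative only).

    patterns: list of (edge_list, k) pattern graphs.
    sensitivities: assumed per-coordinate sensitivity bounds (hypothetical values;
    the paper's actual calibration is not reproduced in this sketch).
    """
    rng = np.random.default_rng(seed)
    dens = np.array([hom_density(edges, k, adj, n) for edges, k in patterns])
    scales = np.array(sensitivities) / epsilon  # Laplace mechanism noise scale
    # Zero-mean noise keeps the embedding unbiased for the true densities.
    return dens + rng.laplace(scale=scales)


# Example: edge and triangle densities of a 4-cycle, with illustrative sensitivities.
adj = [{1, 3}, {0, 2}, {1, 3}, {0, 2}]                      # C4 as neighbor sets
patterns = [([(0, 1)], 2), ([(0, 1), (1, 2), (2, 0)], 3)]   # K2 and K3 patterns
print(private_hom_embedding(patterns, adj, n=4, epsilon=1.0,
                            sensitivities=[0.5, 0.75]))
```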
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 16849