Efficient Differentially Private Tensor Factorization in the Parallel and Distributed Computing Paradigm

Published: 01 Jan 2023, Last Modified: 09 Feb 2025 · ISPA/BDCloud/SocialCom/SustainCom 2023 · License: CC BY-SA 4.0
Abstract: Tensor factorization plays a fundamental role in multiple areas of AI research. Nevertheless, it faces significant challenges related to privacy breaches and computational efficiency. In this study, we propose a novel approach that addresses both issues simultaneously by integrating differential privacy with parallel and distributed computing. To accommodate diverse scenarios, we introduce two models, DPTF-SVRG and ADMM-DPTF, each leveraging specific techniques. DPTF-SVRG is designed for single-GPU environments and uses a variance-reduction strategy for stochastic gradients, converging faster than plain SGD; it further achieves on-GPU parallelism through lock-free asynchronous updates. ADMM-DPTF, in turn, uses distributed ADMM to coordinate DPTF-SVRG workers, enabling multi-GPU parallelism. Experimental results demonstrate that our algorithms outperform existing baselines while maintaining differential privacy.
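To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of how a variance-reduced stochastic gradient step with differentially private noise might look for one factor matrix of a rank-R CP tensor factorization. The function name dp_svrg_step and the parameters step, clip_norm, and noise_std are hypothetical; the paper's exact clipping rule, noise calibration, and asynchronous scheduling may differ.

```python
import numpy as np

def dp_svrg_step(X, A, B, C, idx, A_snap, full_grad_A,
                 step=0.01, clip_norm=1.0, noise_std=0.1, rng=None):
    """One variance-reduced, noise-perturbed update of factor matrix A.

    X           : observed entries as rows (i, j, k, value)
    A, B, C     : current factor matrices of shapes (I, R), (J, R), (K, R)
    idx         : index of the sampled observation
    A_snap      : snapshot of A taken at the start of the current epoch
    full_grad_A : full gradient of the loss w.r.t. A at the snapshot
    """
    rng = np.random.default_rng() if rng is None else rng
    i, j, k, val = X[idx]
    i, j, k = int(i), int(j), int(k)

    kr = B[j] * C[k]                        # Khatri-Rao row for this entry
    resid_cur = A[i] @ kr - val             # residual under current A
    resid_snap = A_snap[i] @ kr - val       # residual under snapshot A

    # SVRG control variate: stochastic gradient minus its value at the
    # snapshot, plus the (periodically recomputed) full gradient.
    g = np.zeros_like(A)
    g[i] = (resid_cur - resid_snap) * kr
    g += full_grad_A

    # Differential privacy: clip the update's norm and add Gaussian noise.
    norm = np.linalg.norm(g)
    if norm > clip_norm:
        g *= clip_norm / norm
    g += rng.normal(0.0, noise_std, size=g.shape)

    return A - step * g
```

In an asynchronous single-GPU setting, many such updates would be applied concurrently without locks, and in the multi-GPU setting a distributed ADMM outer loop would reconcile the factor matrices held by different workers; both of those aspects are omitted from this illustrative sketch.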