SparseVFL: Communication-Efficient Vertical Federated Learning Based on Sparsification of Embeddings and Gradients

Published: 25 Jun 2023, Last Modified: 18 Jul 2023, FL4Data-Mining Poster
Keywords: federated learning, vertical federated learning, communication cost
TL;DR: SparseVFL effectively reduces communication costs for Vertical Federated Learning.
Abstract: In Vertical Federated Learning, a server coordinates a group of clients to perform forward and backward propagation through a neural network. The server and clients exchange intermediate embedding and gradient data, which results in a high communication cost. Traditional approaches trade off the amount of exchanged data against model accuracy. In this work, we propose SparseVFL, an algorithm that reduces the amount of exchanged data while maintaining model accuracy. In both forward and backward propagation, SparseVFL sparsifies embeddings and gradients by combining ReLU activation, the L1 norm of embedding vectors, masked gradients, and run-length coding. Our simulation results show that SparseVFL outperforms existing methods: it reduces the exchanged data size by 68--81\% and the training time by 63\% at a communication throughput of 10 Mbps between the server and clients.
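The core idea described in the abstract can be illustrated with a minimal sketch (a hypothetical illustration, not the authors' implementation): after ReLU, a client's embedding contains many zeros, and run-length coding lets those zero runs be transmitted cheaply before the server reconstructs the vector. All function and variable names below are assumptions made for illustration only; the L1 penalty that would further encourage sparsity during training is not shown.

    # Minimal sketch (assumed names, not the authors' code): client-side
    # sparsification of an embedding and run-length coding of the result.
    import numpy as np

    def sparsify_embedding(embedding: np.ndarray) -> np.ndarray:
        """ReLU zeroes out negative entries, producing a sparse embedding."""
        return np.maximum(embedding, 0.0)

    def run_length_encode(vector: np.ndarray):
        """Encode the vector as (value, run_length) pairs so long runs of
        zeros cost only one pair each."""
        runs, i = [], 0
        while i < len(vector):
            j = i
            while j < len(vector) and vector[j] == vector[i]:
                j += 1
            runs.append((float(vector[i]), j - i))
            i = j
        return runs

    def run_length_decode(runs, dtype=np.float32) -> np.ndarray:
        """Server-side reconstruction of the embedding from the run list."""
        return np.concatenate([np.full(n, v, dtype=dtype) for v, n in runs])

    # Example: a mostly-negative raw embedding becomes a short run list.
    raw = np.array([-0.3, 0.0, 0.8, -1.2, -0.5, 0.4, -0.1, -0.7], dtype=np.float32)
    sparse = sparsify_embedding(raw)
    encoded = run_length_encode(sparse)
    assert np.allclose(run_length_decode(encoded), sparse)

The same compression step would apply on the backward pass to the (masked) gradients sent from the server back to the clients.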