Johnson-Lindenstrauss Transforms in Distributed Optimization

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: optimization, distributed optimization, communication compression
Abstract: The growing size of data and models in machine learning demands efficient methods. Distributed optimization addresses this challenge, for instance, through compression mechanisms that reduce the number of transmitted bits. One well-known dimensionality-reduction technique is the Johnson-Lindenstrauss (JL) transform, which benefits from its ease of implementation. Unlike common sparsification techniques, JL transforms approximately preserve inner products and distances between vectors, which is beneficial for advanced machine learning problems such as Byzantine-robust learning and personalized and vertical federated learning. In this paper, we close this gap by connecting JL transforms with optimization algorithms and demonstrating that communication messages can be compressed with them. We also validate our theoretical results with experiments.
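As an illustration of the property the abstract relies on (not code from the paper), here is a minimal NumPy sketch of a dense Gaussian JL transform; the dimensions d and k, the matrix S, and the test vectors are illustrative assumptions. It compresses d-dimensional messages to k dimensions while approximately preserving inner products and distances.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 10_000, 500  # original and compressed dimensions (assumed for illustration)

# Gaussian JL transform: S has i.i.d. N(0, 1/k) entries, so
# E[(Sx)^T (Sy)] = x^T y and norms/distances are preserved in expectation.
S = rng.standard_normal((k, d)) / np.sqrt(k)

x = rng.standard_normal(d)
y = rng.standard_normal(d)
sx, sy = S @ x, S @ y  # compressed messages: k floats instead of d

print(np.dot(x, y), np.dot(sx, sy))                    # inner product ~ preserved
print(np.linalg.norm(x - y), np.linalg.norm(sx - sy))  # distance ~ preserved
```

In a distributed setting, each worker would send the k-dimensional sketch Sx of its update instead of the full d-dimensional vector, reducing communication by a factor of d/k.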
Primary Area: optimization
Submission Number: 8812