Learning with Differentially Private Sliced Wasserstein Gradients

06 Mar 2026 (modified: 07 May 2026) · Under review for TMLR · CC BY 4.0
Abstract: In this work, we introduce a novel framework for privately optimizing objectives that depend on sliced Wasserstein distances between data-dependent empirical measures. Our main theoretical contribution is a non-trivial analysis of the sensitivity of the Wasserstein gradients to individual data points, derived from an explicit formulation of the gradient in a fully discrete setting. This enables strong privacy guarantees with minimal utility loss. We demonstrate that standard privacy accounting methods naturally extend to Wasserstein-based objectives, allowing for large-scale private training. This supports a wide range of private machine learning applications involving distribution matching under privacy constraints on the source, the target, or both. These include: (i) an in-processing method for fairness mitigation using a private Wasserstein penalty, and (ii) what we believe is the first approach for training private sliced Wasserstein autoencoders. We validate our framework through experiments showing its ability to effectively balance privacy and utility, offering a theoretically grounded approach to privacy-preserving machine learning with sliced Wasserstein losses.
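The sliced Wasserstein distance at the core of the framework is computed by projecting both empirical measures onto random one-dimensional directions and averaging the resulting 1D Wasserstein distances, which have a closed form via sorting. The sketch below is an illustrative Monte Carlo estimator of this quantity, not code from the paper; the function name `sliced_wasserstein` and all parameters are our own, and it assumes equal-size samples and no privacy mechanism.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance
    between two equal-size empirical measures X, Y of shape (n, d).
    Illustrative sketch only; no differential privacy is applied."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Draw random directions uniformly on the unit sphere
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples onto each direction: shape (n, n_projections)
    Xp = X @ theta.T
    Yp = Y @ theta.T
    # In 1D, the optimal coupling matches sorted projections (order statistics)
    Xs = np.sort(Xp, axis=0)
    Ys = np.sort(Yp, axis=0)
    # Average squared transport cost over points and projections
    return np.sqrt(np.mean((Xs - Ys) ** 2))
```

In a private training loop, one would differentiate this quantity with respect to one sample's points, clip per-example gradient contributions according to the sensitivity bound, and add calibrated noise before the optimizer step, in the spirit of DP-SGD accounting mentioned in the abstract.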
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: In this revised version, we have incorporated all improvements suggested by the reviewers to enhance the quality of the paper. All changes are indicated in blue. In particular, we highlight the inclusion of the ablation study on page 26 and the fair and private comparison with the baseline on the Adult dataset in the classification setting on pages 34–35.
Assigned Action Editor: ~Junyuan_Hong1
Submission Number: 7806