BAMBI: Vertical Federated Bilevel Optimization with Privacy-Preserving and Computation Efficiency

Published: 01 Feb 2023, Last Modified: 13 Feb 2023 | ICLR 2023 Conference Withdrawn Submission | Readers: Everyone
Keywords: Vertical federated learning, zeroth-order estimation
TL;DR: To the best of our knowledge, this is the first work on bilevel optimization in the vertical federated learning (VFL) setting.
Abstract: Vertical federated learning (VFL) has shown promise in meeting the vast demand for multi-party privacy-preserving learning. However, existing VFL methods are not applicable to popular machine learning tasks that fall under bilevel programming, such as hyper-representation learning and hyperparameter tuning. A desirable solution is to adopt bilevel optimization (BO) in VFL, but off-the-shelf BO methods are hampered by the difficulty of computing hypergradients in a privacy-preserving and computation-efficient manner under the VFL setting. To address this challenge, this paper proposes a stochastic Bilevel optimizAtion Method with a desirable JacoBian estImator (BAMBI), which constructs a novel zeroth-order (ZO) estimator to locally approximate the Jacobian matrix. This approximation enables BAMBI to compute hypergradients in a privacy-preserving and computation-efficient manner. We prove that BAMBI converges at a rate of $\mathcal{O}(1/\sqrt{K})$ (where $K$ is the total number of upper-level iterations) under the nonconvex-strongly-convex setting, which covers most practical scenarios. This convergence rate is comparable to that of algorithms without a ZO estimator, which justifies our advantage in privacy preservation without sacrificing the convergence rate. Moreover, we design a BAMBI-DP method to further mitigate concerns about label privacy by leveraging the differential privacy (DP) technique. Extensive experiments fully support our algorithms. The code will be released publicly. To the best of our knowledge, this is the first work on bilevel optimization in the VFL setting.
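The abstract's key ingredient is a zeroth-order estimator, i.e., a derivative estimate built from function evaluations alone, which is what lets each party avoid exchanging analytic Jacobians. As a hedged illustration (this is the generic two-point ZO gradient estimator, not BAMBI's specific Jacobian construction; the function `zo_gradient`, the smoothing radius `mu`, and the direction count `num_dirs` are illustrative names, not from the paper):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_dirs=50, rng=None):
    """Two-point zeroth-order estimate of the gradient of f at x.

    Averages directional finite differences (f(x + mu*u) - f(x)) / mu
    over random Gaussian directions u; only function evaluations are
    needed, never analytic derivatives.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    fx = f(x)
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - fx) / mu * u
    return g / num_dirs

# Sanity check on a quadratic: f(x) = ||x||^2 has gradient 2x,
# so the ZO estimate should approach 2x as num_dirs grows.
x = np.array([1.0, -2.0, 0.5])
g = zo_gradient(lambda v: float(v @ v), x, num_dirs=2000, rng=0)
```

The estimate is unbiased up to an $\mathcal{O}(\mu)$ smoothing term, with variance shrinking as `num_dirs` grows; this evaluation-only access pattern is what makes ZO estimators attractive when the loss is split across parties that cannot share gradients.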
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (eg, AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)