Keywords: Distributionally Robust Optimization (DRO), Generalization Bound, Federated Model Evaluation, Glivenko-Cantelli Theorem and DKW Bound
Abstract: In this paper, we address the challenge of certifying the performance of a machine learning model on an unseen target network. We consider a source network "A" of $K$ clients, each with private data from unique and heterogeneous distributions, assumed to be independent samples from a broader meta-distribution $\mu$. Our goal is to provide certified guarantees for the model's performance on a different, unseen target network "B," governed by another meta-distribution $\mu'$, assuming the deviation between $\mu$ and $\mu'$ is bounded by either the {\it Wasserstein} distance or an $f$-{\it divergence}. We derive theoretical guarantees for the model's empirical average loss and provide uniform bounds on the risk CDF, the latter corresponding to novel and adversarially robust versions of the Glivenko-Cantelli theorem and the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality. Our bounds are computable in polynomial time with a polynomial number of queries to the $K$ clients, preserving client privacy by querying only the model's (potentially adversarial) loss on private data. We also establish non-asymptotic generalization bounds that consistently converge to zero as both $K$ and the minimum client sample size grow. Extensive empirical evaluations validate the robustness and practicality of our bounds across real-world tasks.
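As background for the uniform risk-CDF bounds mentioned in the abstract, the sketch below illustrates the *classical* (non-robust) DKW inequality, which gives a distribution-free uniform confidence band around an empirical CDF of observed losses: with probability at least $1-\delta$, $\sup_x |F_n(x) - F(x)| \le \sqrt{\ln(2/\delta)/(2n)}$. This is only a minimal illustration of the standard inequality, not the paper's adversarially robust variant; the function names are our own.

```python
import numpy as np

def dkw_epsilon(n: int, delta: float) -> float:
    # Classical DKW bound: with probability >= 1 - delta,
    # sup_x |F_n(x) - F(x)| <= sqrt(ln(2/delta) / (2n)).
    return float(np.sqrt(np.log(2.0 / delta) / (2.0 * n)))

def empirical_cdf_band(losses, delta: float = 0.05):
    # Uniform confidence band around the empirical CDF of the losses.
    losses = np.sort(np.asarray(losses, dtype=float))
    n = losses.size
    eps = dkw_epsilon(n, delta)
    F_hat = np.arange(1, n + 1) / n        # empirical CDF at the sorted points
    lower = np.clip(F_hat - eps, 0.0, 1.0)  # band stays a valid CDF value
    upper = np.clip(F_hat + eps, 0.0, 1.0)
    return losses, lower, upper
```

Note the $O(n^{-1/2})$ rate: quadrupling the sample size halves the band width, consistent with the abstract's claim that the bounds shrink as client sample sizes grow.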
Primary Area: learning theory
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7895