Tutorial on Fair and Private Deep Learning

Published: 01 Jan 2024, Last Modified: 14 Aug 2024 · COMAD/CODS 2024 · CC BY-SA 4.0
Abstract: Deep Learning (DL) finds application in several prominent fields, including computer vision, natural language processing, and bioinformatics. The proliferation of DL-based methods has brought to notice critical issues about bias (or unfairness) in classification and weak privacy guarantees for the training data. Addressing these issues is crucial to prevent potentially significant negative impacts on users. While there has been progress, the majority of works focus on resolving fairness and privacy independently. We propose a tutorial on “Fair and Private Deep Learning”, aimed at providing an exhaustive discussion on (i) the reasons behind unfair classifications and the lack of privacy, (ii) fairness notions in the literature and methods to ensure them, (iii) differentially private DL, and (iv) algorithms that address fair and private DL simultaneously. Moreover, in this tutorial, we do not limit our attention to classical, centralized DL models but also cover the fairness and privacy challenges in distributed (or federated) DL. The code, presentation, and other details are available at https://github.com/magnetar-iiith/FairPrivateDL.