Reconciling Privacy and Explainability in High-Stakes: A Systematic Inquiry

TMLR Paper4215 Authors

15 Feb 2025 (modified: 05 Apr 2025) · Under review for TMLR · CC BY 4.0
Abstract: The integration of deep learning into diverse high-stakes scientific applications demands a careful balance between privacy and explainability. This work explores the interplay between two essential requirements: Right-to-Privacy (RTP), enforced through differential privacy (DP)—the gold standard for privacy-preserving machine learning due to its rigorous guarantees—and Right-to-Explanation (RTE), facilitated by post-hoc explainers, the go-to tools for model auditing. We systematically assess how DP influences the applicability of widely used explanation methods, uncovering fundamental tensions between privacy-preserving models and explainability objectives. Furthermore, our work sheds light on how RTP and RTE can be reconciled in high-stakes settings. Our study, illustrated with a widely used use-case, concludes by outlining a novel software pipeline that upholds both RTP and RTE requirements.
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: Antti Honkela
Submission Number: 4215
