Privacy and Fairness Analysis in the Post-Processed Differential Privacy Framework

Published: 01 Jan 2025, Last Modified: 15 May 2025 · IEEE Trans. Inf. Forensics Secur. 2025 · CC BY-SA 4.0
Abstract: The post-processed Differential Privacy (DP) framework is routinely adopted to preserve privacy while maintaining important invariant characteristics of datasets in data-release applications such as census data. Typical invariants include non-negative counts and the total population. Subspace DP has been proposed to preserve the total population while guaranteeing DP for sub-populations, and non-negativity post-processing has been shown to inherently incur fairness issues. In this work, we study privacy and unfairness (i.e., accuracy disparity) concerns in the post-processed DP framework. On one hand, we show that the post-processed DP framework with both non-negativity and an accurate total population as constraints inadvertently violates the privacy guarantee it is intended to provide; instead, we propose the post-processed subspace DP framework to accurately define privacy guarantees against adversaries. On the other hand, our empirical analysis shows that the level of unfairness depends on the privacy budget, the count sizes, and their degree of imbalance; particularly concerning is the severe unfairness that arises under strict privacy budgets. We further trace this unfairness back to the uniform privacy budget assigned across different population subgroups. To address it, we propose a varying privacy budget setting method and develop optimization approaches based on ternary search and golden ratio search to identify privacy budget ranges that minimize unfairness while maintaining privacy guarantees. Extensive theoretical and empirical analysis demonstrates the effectiveness of our approaches across different privacy settings and several canonical privacy mechanisms. Using the Australian Census data, the Adult dataset, and a dataset of delinquent children by county and household-head education level, we validate both our privacy analysis framework and our fairness optimization methods, showing significant reductions in accuracy disparity while maintaining strong privacy guarantees.
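To make the pipeline described above concrete, the following is a minimal Python sketch, not the paper's implementation: it applies the Laplace mechanism to subgroup counts, post-processes with a simple non-negativity-plus-exact-total projection, and runs a golden-section (golden ratio) search over how much budget the smallest subgroup receives in order to reduce accuracy disparity. The projection, the budget accounting, and all function names are illustrative assumptions; the paper's own projection, disparity metric, and search ranges may differ.

```python
import numpy as np

def project_nonneg_total(noisy, total):
    """Post-process: clip to non-negative, then rescale so the counts sum to
    the exact (invariant) total. One simple choice, for illustration only."""
    x = np.clip(noisy, 0.0, None)
    s = x.sum()
    return x * (total / s) if s > 0 else np.full_like(x, total / len(x))

def disparity(counts, eps_small, eps_rest, trials=200, seed=0):
    """Accuracy disparity: gap in mean relative error between the smallest and
    largest subgroups when the smallest gets budget eps_small and the others
    get eps_rest. A fixed seed keeps the objective deterministic."""
    rng = np.random.default_rng(seed)
    small, large = counts.argmin(), counts.argmax()
    eps = np.full(len(counts), eps_rest)
    eps[small] = eps_small
    errs = np.zeros(2)
    for _ in range(trials):
        noisy = counts + rng.laplace(scale=1.0 / eps)   # Laplace mechanism
        released = project_nonneg_total(noisy, counts.sum())
        errs += [abs(released[small] - counts[small]) / counts[small],
                 abs(released[large] - counts[large]) / counts[large]]
    return abs(errs[0] - errs[1]) / trials

def golden_section_min(f, lo, hi, tol=1e-3):
    """Golden-section search for the minimizer of a unimodal f on [lo, hi]."""
    invphi = (np.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    a, b = lo, hi
    while b - a > tol:
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

counts = np.array([40.0, 500.0, 12000.0])   # imbalanced subgroup counts
total_eps = 1.0                             # overall budget (illustrative)
# Vary the budget assigned to the smallest subgroup; the remaining groups
# share the rest (a simplistic accounting, for illustration only).
best = golden_section_min(lambda e: disparity(counts, e, total_eps - e),
                          0.05, 0.95)
print(f"budget for smallest subgroup: {best:.3f}")
```

The sketch illustrates why a uniform budget can be unfair: under the Laplace mechanism, the same noise scale produces a far larger relative error for a count of 40 than for a count of 12000, and the one-dimensional search simply trades budget toward the smaller subgroup until the two relative errors roughly balance.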