Abstract: Differential privacy (DP) has become the gold standard for privacy-preserving data analysis, but its applicability can
be limited in scenarios involving complex dependencies between
sensitive information and datasets. To address this, we introduce
differential confounding privacy (DCP), a specialized form of
the Pufferfish privacy (PP) framework that generalizes DP by
accounting for broader relationships between sensitive information and datasets. DCP adopts the (ϵ, δ)-indistinguishability
framework to quantify privacy loss. We show that while DCP
mechanisms retain privacy guarantees under composition, they
lack the graceful compositional properties of DP. To overcome
this, we propose an Inverse Composition (IC) framework, in which
a leader-follower model optimally designs a privacy strategy to
achieve target guarantees without relying on worst-case privacy
proofs such as sensitivity calculations. Experimental results validate IC's effectiveness in managing privacy budgets and ensuring
rigorous privacy guarantees under composition.
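For reference, the (ϵ, δ)-indistinguishability condition mentioned above presumably follows the standard Pufferfish-style formulation (the abstract does not spell it out, so the exact form given here is an assumption and the paper's DCP definition may restrict or broaden it): for every discriminative pair of secrets (s_i, s_j), every admissible data distribution θ, and every set of outputs S,

    Pr[M(D) ∈ S | s_i, θ] ≤ e^ϵ · Pr[M(D) ∈ S | s_j, θ] + δ,

where M is the randomized mechanism and the dataset D is drawn according to θ. Taking the secrets to be "record present" versus "record absent" and letting θ range over distributions with independent records recovers the usual (ϵ, δ)-DP guarantee, which is how such Pufferfish instantiations generalize DP.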