Metric-Normalized Posterior Leakage (mPL): Attacker-Aligned Privacy for Joint Consumption

08 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Metric differential privacy, posterior leakage, joint observation
TL;DR: We introduce mPL, show per-record mDP can fail under joint observation, and use AmPL to curb violations with minimal utility loss.
Abstract: Metric differential privacy (mDP) strengthens local differential privacy (LDP) by scaling noise to semantic distance, but many ML systems are consumed under joint observation, where model-agnostic, per-record guarantees can miss leakage from evidence aggregation. We introduce metric-normalized posterior leakage (mPL), an attacker-aligned, distance-calibrated measure of the posterior-odds shift induced by releases, and show that for single or independent releases, uniformly bounding mPL is equivalent to mDP. Under joint observation, however, satisfying mDP may still leave mPL high, because learned aggregators compound evidence across correlated items. To make control practical, we formalize probabilistically bounded mPL (PBmPL), which limits how often mPL may exceed a target budget, and we operationalize it via Adaptive mPL (AmPL), a trust-and-verify pipeline that perturbs, audits with a learned attacker, and adapts parameters (with optional Bayesian remapping) to balance privacy and utility. In a word-embedding case study, neural adversaries violate mPL under joint consumption despite per-record mDP perturbations, whereas AmPL substantially lowers the frequency of such violations at low utility cost, suggesting that PBmPL offers practical, certifiable protection in joint-consumption settings.
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 2901
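The abstract's core quantity can be illustrated with a small sketch. Under Bayes' rule, the shift in log posterior odds between two candidate records equals the log-likelihood ratio of the release, and normalizing that shift by semantic distance gives an mPL-style measure; with a single mDP-like release the normalized shift stays within the noise budget, while jointly observing two releases of the same record compounds the evidence. All names, the toy mechanism, and the exact formalization below are hypothetical illustrations, not the paper's definition or implementation.

```python
import math

def mpl(prior, likelihood, release, dist):
    """Illustrative metric-normalized posterior leakage (hypothetical
    formalization): the worst-case shift in log posterior odds between
    any two candidate records, normalized by their semantic distance."""
    # Posterior over the candidate set via Bayes' rule.
    evidence = sum(prior[x] * likelihood(release, x) for x in prior)
    posterior = {x: prior[x] * likelihood(release, x) / evidence
                 for x in prior}
    worst = 0.0
    records = list(prior)
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            shift = abs(math.log(posterior[a] / posterior[b])
                        - math.log(prior[a] / prior[b]))
            worst = max(worst, shift / dist(a, b))
    return worst

# Toy setup: three integer records, uniform prior, and a geometric-style
# mechanism whose noise scales with distance (mDP-like, eps = 1.0).
EPS = 1.0
prior = {0: 1 / 3, 1: 1 / 3, 2: 1 / 3}
dist = lambda a, b: abs(a - b)
single = lambda y, x: math.exp(-EPS * abs(y - x))

# One release: mPL stays within the per-release budget eps.
print(mpl(prior, single, 0, dist))      # ~1.0

# Joint observation of two releases of the same record: the attacker's
# evidence compounds, and mPL exceeds the per-release budget.
joint = lambda ys, x: math.prod(single(y, x) for y in ys)
print(mpl(prior, joint, (0, 0), dist))  # ~2.0
```

The doubling in the second call is the toy analogue of the abstract's claim that per-record mDP perturbations can still leave mPL high once a consumer aggregates correlated releases.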