Censoring with Plausible Deniability: Asymmetric Local Privacy for Multi-Category CDF Estimation

ICLR 2026 Conference Submission 13200 Authors

18 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Local Differential Privacy, Utility-Optimized Local Differential Privacy, Distribution Estimation, Multi-Attribute Data
TL;DR: An adaptive, asymmetric local privacy mechanism that selectively censors sensitive responses, improving utility for cumulative distribution estimation of numerical data paired with categorical identifiers.
Abstract: We introduce a new mechanism within the Utility-Optimized Local Differential Privacy (ULDP) framework that enables censoring with plausible deniability when collecting and analyzing sensitive data. Our approach addresses scenarios where certain values, such as large numerical responses, are more privacy-sensitive than others, while accompanying categorical information may not be private on its own but could still be identifying. The mechanism selectively withholds identifying details when a response might indicate sensitive content, offering asymmetric privacy protection. Unlike previous methods, it does not require the sensitive values to be specified in advance, making it more adaptable and practical. Although the mechanism is designed for ULDP, it can also be applied under symmetric LDP, where it still benefits from censoring and a reduced privacy cost. We provide theoretical guarantees, including uniform consistency and pointwise weak convergence results, and extensive numerical experiments demonstrate the validity of the developed methodology.
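As a rough illustration of the idea of censoring with plausible deniability (not the paper's actual mechanism, which notably avoids predefining which values are sensitive), consider a toy local randomizer that withholds the categorical identifier with higher probability when the numerical response looks sensitive, and with a smaller probability otherwise, so that a censored report does not by itself reveal sensitivity. The threshold, probabilities, and function names below are hypothetical.

```python
import random

def censored_report(category, value, threshold=100.0, p_censor=0.7, q_censor=0.1):
    """Toy local randomizer (illustrative sketch only, not the paper's mechanism).

    If the numerical value is deemed potentially sensitive (here, above a
    hypothetical fixed threshold), the categorical identifier is withheld
    with probability p_censor; non-sensitive responses are also withheld
    with a smaller probability q_censor, giving plausible deniability to
    every censored report.
    """
    sensitive = value > threshold
    censor_prob = p_censor if sensitive else q_censor
    if random.random() < censor_prob:
        return ("<withheld>", value)  # identifier censored
    return (category, value)          # identifier released as-is

if __name__ == "__main__":
    # Simulate reports from a small, hypothetical population.
    random.seed(0)
    data = [("dept_A", 42.0), ("dept_B", 250.0), ("dept_C", 97.0)]
    for cat, val in data:
        print(censored_report(cat, val))
```

In contrast to this fixed-threshold toy, the submission's mechanism adapts to the data and is analyzed under the ULDP framework, with the censoring itself contributing to the privacy accounting.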
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 13200