Abstract: End-to-end encrypted messaging services (EEMSs) enable private communication by encrypting messages, yet they also make it challenging to moderate content and combat the spread of abusive messages. There is an urgent call for supporting content moderation in EEMSs while ensuring user privacy. In this paper, we present a new system design for privacy-assured content moderation in EEMSs. At a high level, users in our system can privately report abusive messages, and the EEMS traces the source of a message only if its aggregated report count exceeds a predefined threshold and the message is audited to be abusive. Our system mainly departs from prior work in that it allows flexible and adaptable thresholds, offers robustness against dishonest reporters who submit malformed reports, and better ensures the privacy of all users during the moderation process. We go a step further and propose a privacy-aware detection mechanism, built on a blocklist maintained with transparency, to mitigate the further spread of identified abusive messages by forwarders. A formal security analysis is provided, and extensive experiments demonstrate the practical efficiency of our system.
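To make the threshold-gated moderation flow described above concrete, the sketch below shows the plain (non-cryptographic) logic only: reports accumulate per message, and source tracing is attempted only once the report count reaches the threshold and an audit confirms the message is abusive. All names (ThresholdModerator, register_message, report) are hypothetical illustrations, not the paper's protocol, which replaces these plaintext steps with privacy-preserving cryptographic counterparts.

```python
# Minimal sketch of threshold-gated tracing, assuming plaintext message
# identifiers and an out-of-band audit function; the actual system keeps
# reports and sources hidden via cryptographic techniques.
from collections import defaultdict
from typing import Callable, Dict, Optional, Set


class ThresholdModerator:
    def __init__(self, threshold: int, audit: Callable[[str], bool]):
        self.threshold = threshold                      # adaptable per-deployment threshold
        self.audit = audit                              # abuse audit callback
        self.reports: Dict[str, Set[str]] = defaultdict(set)
        self.sources: Dict[str, str] = {}               # message id -> original sender

    def register_message(self, msg_id: str, source: str) -> None:
        self.sources[msg_id] = source

    def report(self, msg_id: str, reporter_id: str) -> Optional[str]:
        """Record a report; return the traced source only if the report
        count meets the threshold AND the audit flags the message."""
        self.reports[msg_id].add(reporter_id)           # duplicate reports are ignored
        if len(self.reports[msg_id]) >= self.threshold and self.audit(msg_id):
            return self.sources.get(msg_id)
        return None


# Usage: with a threshold of 3, tracing is not even considered earlier.
mod = ThresholdModerator(threshold=3, audit=lambda m: True)
mod.register_message("msg-42", source="alice")
assert mod.report("msg-42", "u1") is None               # below threshold: no tracing
assert mod.report("msg-42", "u2") is None
print(mod.report("msg-42", "u3"))                        # prints "alice"
```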