Preserving the Privacy of Reward Functions in MDPs through Deception

Published: 01 Jan 2024, Last Modified: 06 Aug 2025 · ECAI 2024 · CC BY-SA 4.0
Abstract: Preserving the privacy of the preferences (or rewards) of a sequential decision-making agent when its decisions are observable is crucial in many physical and cybersecurity domains. For instance, in wildlife monitoring, forest rangers must conduct surveillance without revealing animal locations to poachers. This paper addresses privacy preservation in planning over a sequence of actions in MDPs, where the reward function represents the preference structure to be protected. Observers can use Inverse RL (IRL) to learn these preferences from observed behavior, making this a challenging task. Current research on Differential Privacy (DP) in this setting fails to ensure a lower bound on the minimum expected reward and offers theoretical guarantees that are inadequate against IRL-based observers. To bridge this gap, we propose a novel approach rooted in the theory of deception. Deception includes two models: dissimulation (hiding the truth) and simulation (showing the false). As our first contribution, we theoretically demonstrate a significant privacy leak in the current dissimulation-based method. Our second contribution is a novel RL-based planning algorithm that uses simulation to effectively address these privacy concerns while ensuring a guarantee on the expected reward. Through experiments on multiple benchmark problems, we show that our proposed approach outperforms existing methods in preserving the privacy of reward functions. Code to reproduce the results can be found at: https://github.com/shshnkreddy/DeceptiveRL
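To make the simulation idea concrete, below is a minimal illustrative sketch (not the paper's algorithm) of simulation-style deceptive planning in a small tabular MDP: the agent plans so that its behavior appears driven by a decoy reward while its true expected reward is kept above a guaranteed fraction of the optimum. All names and parameters here (`decoy_R`, `reward_floor`, the scalarization sweep, the uniform start-state distribution) are assumptions made for illustration only.

```python
# Illustrative sketch of simulation-based deceptive planning (hypothetical, not the
# authors' method): behave as if optimizing a decoy reward while guaranteeing a floor
# on the true expected reward.
import numpy as np


def value_iteration(P, R, gamma=0.95, iters=500):
    """P: (S, A, S) transition tensor, R: (S, A) rewards -> greedy policy and Q-values."""
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = R + gamma * P @ V          # (S, A, S) @ (S,) -> (S, A)
    return Q.argmax(axis=1), Q


def policy_value(P, R, policy, gamma=0.95, iters=500):
    """Expected discounted return of a deterministic policy under reward R."""
    S, _ = R.shape
    idx = np.arange(S)
    V = np.zeros(S)
    for _ in range(iters):
        V = R[idx, policy] + gamma * P[idx, policy] @ V
    return V.mean()                    # uniform start-state distribution, for simplicity


def simulate_with_decoy(P, true_R, decoy_R, reward_floor=0.8, gamma=0.95):
    """Scalarization sweep: lean on the decoy reward as much as possible while the
    true expected reward stays above reward_floor * optimal true value."""
    opt_policy, _ = value_iteration(P, true_R, gamma)
    v_opt = policy_value(P, true_R, opt_policy, gamma)
    best = opt_policy
    for w in np.linspace(0.0, 1.0, 21):                     # weight on the decoy reward
        policy, _ = value_iteration(P, (1 - w) * true_R + w * decoy_R, gamma)
        if policy_value(P, true_R, policy, gamma) >= reward_floor * v_opt:
            best = policy                                   # most deceptive feasible policy so far
    return best
```

Under this toy formulation, an IRL-based observer watching the returned policy would tend to attribute it to the decoy reward, while the `reward_floor` constraint mirrors the expected-reward guarantee discussed in the abstract; the paper's actual algorithm and guarantees are given in the linked repository and full text.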