Personalized Federated Hypernetworks for Privacy Preservation in Multi-Task Reinforcement Learning


22 Sept 2022, 12:34 (modified: 18 Nov 2022, 02:52) · ICLR 2023 Conference Blind Submission · Readers: Everyone
Keywords: microgrid clusters, energy demand response, transactive energy control, neural networks, multi-agent reinforcement learning, reinforcement learning, multi-task learning, transfer learning, hypernetworks, federated learning, personalized federated learning, microgrids
TL;DR: We use hypernetworks to aggregate learning across multiple reinforcement learning agents in a microgrid energy demand response setting while preserving privacy.
Abstract: Multi-Agent Reinforcement Learning currently focuses on implementations where all data and training can be centralized on one machine. But what if local agents are split across multiple tasks and need to keep their data private from one another? We develop the first application of Personalized Federated Hypernetworks (PFH) to Reinforcement Learning (RL). We then present a novel application of PFH to few-shot transfer and demonstrate significant initial gains in learning. PFH has never been demonstrated beyond supervised learning benchmarks, so we apply PFH to an important domain: RL price-setting for energy demand response. We consider a general case in which agents are split across multiple microgrids, wherein energy consumption data must be kept private within each microgrid. Together, our work explores how the fields of personalized federated learning and RL can come together to make learning efficient across multiple tasks while keeping data secure.
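The core PFH idea described in the abstract can be sketched as follows: a server-side hypernetwork maps a per-client embedding to the weights of that client's local policy network, so only embeddings and generated weights cross the network while raw consumption data stays on the client. This is a minimal illustrative sketch, not the paper's implementation; all names, dimensions, and the linear hypernetwork are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper).
EMB_DIM = 4   # per-client embedding size
OBS_DIM = 3   # local policy observation size
ACT_DIM = 2   # local policy action count

# Server-side hypernetwork: here a single linear map from a client
# embedding to the flattened weights of that client's policy network.
W_hyper = rng.normal(scale=0.1, size=(OBS_DIM * ACT_DIM, EMB_DIM))

def generate_policy_weights(client_embedding):
    """Hypernetwork forward pass: embedding -> personalized policy weights."""
    flat = W_hyper @ client_embedding
    return flat.reshape(OBS_DIM, ACT_DIM)

def local_policy(obs, policy_weights):
    """Client-side policy: softmax distribution over actions."""
    logits = obs @ policy_weights
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Each client (here, a hypothetical microgrid) holds a private embedding;
# its raw observations never leave the client.
client_embeddings = {
    f"microgrid_{i}": rng.normal(size=EMB_DIM) for i in range(3)
}

weights = {
    name: generate_policy_weights(emb)
    for name, emb in client_embeddings.items()
}
obs = rng.normal(size=OBS_DIM)  # a local observation, kept on-client
action_probs = {
    name: local_policy(obs, w) for name, w in weights.items()
}
```

In a full PFH setup, each client would train its generated policy locally (e.g., with a policy-gradient update) and send gradients back through the hypernetwork, which aggregates learning across clients; the sketch above only shows the personalization step, where distinct embeddings yield distinct policy weights per microgrid.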
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics)