Cache Me If You Can: The Case For Retrieval Augmentation in Federated Learning

Published: 05 Mar 2024, Last Modified: 04 May 2024 · PML · CC BY 4.0
Keywords: Federated Learning, Retrieval Augmentation, Privacy, GDPR, Regulatory Compliance, Non-parametric, RAG
TL;DR: This study introduces Retrieval Augmentation to enhance Federated Learning for improved privacy and regulatory compliance.
Abstract: We propose retrieval augmentation (RA) as an enhancement to federated learning (FL) that can improve privacy protection and ensure regulatory compliance. FL, primarily designed for data privacy preservation, faces challenges with conventional parametric models which are susceptible to privacy breaches and potentially non-compliant with regulations such as data erasure mandates. RA addresses these issues by integrating a retrieval-based method during the inference phase, achieving "perfect secrecy" by limiting server access to private documents and reducing barriers to compliance. This study conducts a thorough evaluation of RA's efficacy within the FL paradigm, positioning it as a preferable alternative to traditional parametric models within analogous memory constraints. We characterize potential applications that may benefit from RA in FL, showing in particular that it is well-suited for knowledge-intensive, few-shot environments—offering scalable inference-time operations, source attribution, and the ability to dynamically update and unlearn knowledge for compliance. We present a new modeling framework, named Raffle, to investigate RA for FL applications with labeled and unlabeled data. Implementing Raffle in homogeneous settings for few-shot question answering, we explore the influence on client participation dynamics and the importance of passage index composition for effective generalization.
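To illustrate the mechanism the abstract describes, the following is a minimal toy sketch of a client-side passage index: private documents stay on the client, retrieval happens at inference time, and "unlearning" for erasure mandates reduces to deleting an index entry. All names and the API here are our own illustration, not the paper's Raffle implementation, and the embeddings are placeholders for a real encoder.

```python
import numpy as np

class LocalPassageIndex:
    """Illustrative client-side passage store for retrieval-augmented FL.

    Private documents never leave the client; the shared model only sees
    retrieved text at inference time, and erasing a document is a simple
    deletion from the index rather than model retraining.
    """

    def __init__(self, dim):
        self.dim = dim
        self.passages = {}  # passage_id -> (unit-norm embedding, text)

    def add(self, pid, embedding, text):
        v = np.asarray(embedding, dtype=float)
        self.passages[pid] = (v / np.linalg.norm(v), text)

    def erase(self, pid):
        # Data-erasure compliance: drop the passage outright.
        self.passages.pop(pid, None)

    def retrieve(self, query_embedding, k=2):
        # Rank stored passages by cosine similarity to the query.
        q = np.asarray(query_embedding, dtype=float)
        q = q / np.linalg.norm(q)
        scored = sorted(
            self.passages.items(),
            key=lambda item: -float(q @ item[1][0]),
        )
        return [(pid, text) for pid, (_, text) in scored[:k]]

# Toy usage with hand-made 2-D "embeddings":
index = LocalPassageIndex(dim=2)
index.add("p1", [1.0, 0.0], "Document about topic A.")
index.add("p2", [0.0, 1.0], "Document about topic B.")
print(index.retrieve([1.0, 0.1], k=1))  # nearest passage is p1
index.erase("p1")                        # unlearn p1 on request
print(index.retrieve([1.0, 0.1], k=1))  # p1 is gone; p2 is returned
```

Retrieved passages would then be concatenated with the query before being fed to the shared model, so updating or unlearning knowledge never requires touching the model parameters.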
Submission Number: 9