Learning to Keep Secrets: Empowering Local LLMs with Autonomous Dynamic Privacy-Conscious Delegation

ACL ARR 2026 January Submission 4408 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: privacy, LLM
Abstract: While cloud-hosted Large Language Models (LLMs) offer superior capabilities, they introduce significant privacy risks. Conversely, local models ensure data sovereignty but often suffer from limited performance. Privacy-conscious delegation resolves this tension by enabling local models to collaborate with remote ones: the local model synthesizes a privacy-preserving prompt, queries a remote model, and combines the returned information with the private context locally. However, existing methods rely on prompting to enforce privacy constraints, which can be brittle for local models, and they typically delegate in a static manner, causing unnecessary exposure even when a local solution would suffice. To address these limitations, we propose ADAPT (Autonomous Delegation Agent with Privacy Training), a framework that transforms delegation from a static pipeline into a dynamic, learnable agent. We train the local model to autonomously manage the entire execution path, from deciding when to delegate to synthesizing privacy-preserving prompts, thereby fostering robust, intrinsic privacy-preserving capabilities. Experimental results demonstrate that ADAPT significantly outperforms existing baselines.
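To make the delegation pipeline concrete, the following is a minimal Python sketch of the loop the abstract describes. All names here (LocalModel, delegate, remote_call) are hypothetical illustrations; the page does not specify ADAPT's actual interfaces or training procedure.

```python
# Minimal sketch of the privacy-conscious delegation loop described in the
# abstract. All class and function names (LocalModel, delegate, remote_call)
# are hypothetical illustrations, not the paper's actual implementation.

from typing import Callable


class LocalModel:
    """Stand-in for a small on-device LLM with the skills ADAPT trains."""

    def should_delegate(self, query: str, context: str) -> bool:
        """Decide autonomously whether remote help is needed (dynamic trigger)."""
        raise NotImplementedError  # learned by the model, stubbed here

    def sanitize(self, query: str, context: str) -> str:
        """Synthesize a privacy-preserving prompt that omits private details."""
        raise NotImplementedError

    def answer_locally(self, query: str, context: str) -> str:
        """Answer directly when delegation is unnecessary."""
        raise NotImplementedError

    def combine(self, query: str, context: str, remote_answer: str) -> str:
        """Merge the remote model's generic answer with the private context."""
        raise NotImplementedError


def delegate(query: str, private_context: str, local: LocalModel,
             remote_call: Callable[[str], str]) -> str:
    """One execution step: delegate only when the local model decides to."""
    if not local.should_delegate(query, private_context):
        # A local solution suffices: nothing leaves the device.
        return local.answer_locally(query, private_context)
    safe_prompt = local.sanitize(query, private_context)
    remote_answer = remote_call(safe_prompt)  # only the sanitized prompt is sent
    return local.combine(query, private_context, remote_answer)
```

The key property this sketch illustrates is that the remote endpoint only ever sees safe_prompt, and the should_delegate check is what makes exposure dynamic rather than static.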
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: security and privacy
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 4408