Patching LLMs like Software: A Lightweight Method for Improving Existing Policies in Large Language Models

ICLR 2026 Conference Submission 14669 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Large language model safety, Parameter-efficient fine-tuning, Toxicity mitigation, Bias reduction, Direct Preference Optimization (DPO), Soft prompts
Abstract: We propose $\textit{patching}$ large language models (LLMs) like software versions: a lightweight, modular approach to addressing safety vulnerabilities. Vendors release improved LLM versions, but major releases are costly, infrequent, and difficult to tailor to customer needs, leaving deployed models with known safety gaps. Unlike full-model fine-tuning or major version updates, our method enables rapid remediation by prepending a compact, learnable prefix to an existing model. This “patch” adds only $0.003\%$ additional parameters, yet reliably steers model behavior toward that of a safer reference model. Across three critical domains (toxicity mitigation, bias reduction, and harmfulness refusal), policy patches achieve safety improvements comparable to next-generation safety-aligned models while preserving fluency. Our results demonstrate that LLMs can be “patched” much like software, offering vendors and practitioners a practical mechanism for distributing scalable, efficient, and composable safety updates between major model releases.
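The core mechanism described in the abstract is a short, learnable prefix prepended to a frozen model. Below is a minimal sketch of that idea, assuming a generic embedding-space interface; the class name `PolicyPatch`, the toy base network, and the prefix length are illustrative stand-ins, not the authors' implementation, and the actual patch would be trained with a preference objective such as DPO against a safer reference model.

```python
# Minimal sketch of a prefix "patch": freeze the released model and learn only
# a handful of soft-prompt embeddings prepended to every input.
import torch
import torch.nn as nn

class PolicyPatch(nn.Module):
    """Learnable prefix prepended to the input embeddings of a frozen base model."""
    def __init__(self, base_lm: nn.Module, embed_dim: int, prefix_len: int = 16):
        super().__init__()
        self.base_lm = base_lm
        for p in self.base_lm.parameters():   # the released model stays untouched
            p.requires_grad_(False)
        # Only these parameters are trained -- a tiny fraction of the model size.
        self.prefix = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return self.base_lm(torch.cat([prefix, input_embeds], dim=1))

# Toy stand-in for a frozen decoder, used only to show the wiring.
toy_lm = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
patched = PolicyPatch(toy_lm, embed_dim=64, prefix_len=16)
out = patched(torch.randn(2, 10, 64))
print(out.shape)  # (2, 26, 64): the 16 prefix positions plus the 10 input tokens
```

Because the base model is frozen, distributing a patch amounts to shipping only the prefix tensor, which is what makes the updates composable and cheap to deploy between major releases.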
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 14669