Can Language Models Safeguard Themselves, Instantly and For Free?

Published: 28 Jun 2024, Last Modified: 25 Jul 2024
Venue: NextGenAISafety 2024 Poster
License: CC BY 4.0
Keywords: free alignment, self-generated data, self-alignment
TL;DR: We investigate whether free safety alignment is possible.
Abstract: Aligning pretrained language models (LMs) to handle a new safety scenario is normally difficult and expensive, often requiring access to large amounts of ground-truth preference data and substantial compute. Are these costs necessary? That is, is it possible to safeguard an LM using only inherent model knowledge and without additional training? We tackle this challenge with AlignEZ, a novel approach that uses (1) self-generated preference data and (2) representation editing to provide nearly cost-free safety alignment. During inference, AlignEZ modifies LM representations to reduce undesirable and boost desirable components, using subspaces identified via self-generated preference pairs. Our experiments reveal that this nearly cost-free procedure narrows the gap between base pretrained models and their alignment-tuned counterparts by an average of 17%, observed across three datasets and three model architectures. Additionally, we study the conditions under which improvement using AlignEZ is feasible, providing valuable insights into its effectiveness.
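Below is a minimal PyTorch sketch of the kind of inference-time representation editing the abstract describes: hidden states are nudged away from an "undesirable" direction and toward a "desirable" one, with the directions estimated from self-generated preference pairs. All names, hyperparameters, and the toy data here are illustrative assumptions, not the authors' implementation.

```python
# Sketch of inference-time representation editing in the spirit of AlignEZ.
# Everything below (function names, alpha/beta, toy embeddings) is assumed
# for illustration only.
import torch


def identify_direction(preferred: torch.Tensor, dispreferred: torch.Tensor) -> torch.Tensor:
    """Estimate a unit direction separating preferred from dispreferred
    hidden states, e.g. the mean difference over self-generated pairs."""
    direction = (preferred - dispreferred).mean(dim=0)
    return direction / direction.norm()


def edit_hidden_state(h: torch.Tensor,
                      helpful_dir: torch.Tensor,
                      harmful_dir: torch.Tensor,
                      alpha: float = 1.0,
                      beta: float = 1.0) -> torch.Tensor:
    """Suppress the projection onto the undesirable direction and boost the
    projection onto the desirable direction (applied per token at inference)."""
    h = h - beta * (h @ harmful_dir).unsqueeze(-1) * harmful_dir    # reduce undesirable component
    h = h + alpha * (h @ helpful_dir).unsqueeze(-1) * helpful_dir   # boost desirable component
    return h


if __name__ == "__main__":
    d = 16
    # Toy stand-ins for embeddings of self-generated preference pairs.
    preferred = torch.randn(8, d)
    dispreferred = torch.randn(8, d)
    helpful_dir = identify_direction(preferred, dispreferred)
    # Placeholder: AlignEZ identifies the undesirable subspace separately;
    # here we just reuse the reversed pairs for the toy example.
    harmful_dir = identify_direction(dispreferred, preferred)
    h = torch.randn(2, d)  # hidden states for two tokens
    print(edit_hidden_state(h, helpful_dir, harmful_dir).shape)  # torch.Size([2, 16])
```

In practice such an edit would be applied via forward hooks at selected layers of the LM during generation; the sketch only shows the core projection arithmetic.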
Submission Number: 63