Abstract: We introduce a novel inference-time alignment approach for LLMs that aims to generate safe responses almost surely, i.e., with probability approaching one w.r.t. a given cost model. Our approach models the generation of safe responses as a constrained Markov Decision Process (MDP) within the LLM's latent space. We augment the MDP state with a safety state that tracks the evolution of the safety constraints and dynamically penalize unsafe generations. Consequently, we establish formal safety guarantees w.r.t. the given cost model upon solving the MDP in the latent space with sufficiently large penalties. Building on this foundation, we propose $\texttt{InferenceGuard}$, a practical implementation that safely aligns LLMs without modifying the model weights. Empirically, we demonstrate that $\texttt{InferenceGuard}$ effectively balances safety and task performance, outperforming existing inference-time alignment methods in generating safe and aligned responses. Our findings contribute to the advancement of safer LLM deployment through alignment at inference time, presenting a promising alternative to resource-intensive, overfitting-prone alignment techniques like RLHF.
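Below is a minimal sketch, not the authors' implementation, of the idea described in the abstract: the generation state is augmented with a safety budget derived from the cost model, and the reward at each step is shaped with a large penalty once the budget is exhausted. All names (`SafetyAugmentedState`, `step`, `reward_fn`, `cost_fn`, `penalty`) are hypothetical placeholders for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class SafetyAugmentedState:
    """Latent state augmented with the remaining safety budget."""
    latent: Sequence[float]  # hypothetical latent representation of the partial response
    budget: float            # remaining safety budget under the given cost model


def step(
    state: SafetyAugmentedState,
    token: int,
    transition_fn: Callable[[Sequence[float], int], Sequence[float]],
    reward_fn: Callable[[Sequence[float], int], float],
    cost_fn: Callable[[Sequence[float], int], float],
    penalty: float,
) -> tuple[float, SafetyAugmentedState]:
    """One transition of the safety-augmented MDP (illustrative only).

    The safety budget tracks the evolution of the constraint; once the
    accumulated cost exhausts it, continuations are penalized so that,
    with a sufficiently large penalty, the solver favors safe responses.
    """
    cost = cost_fn(state.latent, token)
    next_budget = state.budget - cost
    reward = reward_fn(state.latent, token)
    if next_budget < 0:
        reward -= penalty  # large penalty discourages constraint-violating continuations
    next_latent = transition_fn(state.latent, token)
    return reward, SafetyAugmentedState(latent=next_latent, budget=next_budget)
```

In this sketch the penalty term converts the constrained problem into an unconstrained one over the augmented state, which is the standard way such guarantees are argued as the penalty grows large; the paper's actual construction in the latent space may differ.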
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Lingpeng_Kong1
Submission Number: 6322