Keywords: large language models, inference-time guardrailing, model alignment, AI safety, guardrailing tax, instruction compilation
TL;DR: We introduce PrimeGuard, a method that dynamically routes queries to ensure LLM safety and helpfulness, validated with extensive testing and a new comprehensive benchmark.
Abstract: Deploying language models (LMs) requires outputs that are both high-quality and compliant with safety guidelines. Although Inference-Time Guardrails (ITG) offer solutions that shift model output distributions towards compliance, we find that current methods struggle to balance safety with helpfulness. ITG methods that safely address non-compliant queries exhibit lower helpfulness, while those that prioritize helpfulness compromise on safety. We refer to this trade-off as the guardrail tax, analogous to the alignment tax.
To address this, we propose PrimeGuard, a novel ITG method that utilizes structured control flow. PrimeGuard routes requests to different self-instantiations of the LM with varying instructions, leveraging its inherent instruction-following capabilities and in-context learning. Our tuning-free approach dynamically compiles system-designer guidelines for each query.
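The following is a minimal sketch of the kind of inference-time routing the abstract describes, assuming a generic `llm(system, user)` chat-completion helper; the route labels, guideline text, and prompt wording are illustrative placeholders, not the paper's exact implementation.

```python
# Hypothetical sketch of PrimeGuard-style routing: a guard instantiation of the
# LLM classifies the query, then a second self-instantiation answers under
# system instructions compiled for that route.

def llm(system: str, user: str) -> str:
    """Placeholder for any chat-LLM call (e.g. an API or local client)."""
    raise NotImplementedError

GUIDELINES = "Refuse requests that facilitate harm; otherwise answer helpfully."

def route(query: str) -> str:
    """Ask a guard instantiation of the same LLM to assess the query's risk."""
    verdict = llm(
        system=(
            f"You are a safety router. Guidelines:\n{GUIDELINES}\n"
            "Reply with exactly one of: minimal_risk, potential_violation, direct_violation."
        ),
        user=query,
    )
    return verdict.strip()

def primeguard_respond(query: str) -> str:
    """Route the query, then answer with instructions compiled for that route."""
    label = route(query)
    if label == "direct_violation":
        # Restrictive instantiation: refuse and cite the relevant guideline.
        return llm(system=f"Politely refuse, citing:\n{GUIDELINES}", user=query)
    if label == "potential_violation":
        # Borderline case: answer helpfully while conditioning on the guidelines.
        return llm(system=f"Answer helpfully while complying with:\n{GUIDELINES}", user=query)
    # Minimal risk: plain helpful instantiation.
    return llm(system="You are a helpful assistant.", user=query)
```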
We construct and release safe-eval, a diverse red-team safety benchmark. Extensive evaluations demonstrate that PrimeGuard, without fine-tuning, outperforms all competing baselines and overcomes the guardrail tax by improving the fraction of safe responses from 61% to 97% and increasing average helpfulness scores from 4.17 to 4.29 on the largest models, while reducing attack success rates from 100% to 8%.
Submission Number: 57