Refining Input Guardrails: Enhancing LLM-as-a-Judge Efficiency Through Chain-of-Thought Fine-Tuning and Alignment

Published: 13 Jan 2025, Last Modified: 26 Feb 2025AAAI 2025 PDLM OralEveryoneRevisionsBibTeXCC BY 4.0
Keywords: LLM Judge Input Guardrails, LLM Moderation, LLM Supervised Fine-tuning, LLM Alignment, Chain-of-Thought LLM Prompting, User-Facing AI Agents, AI Agent Safety, Jailbreaks, Adversarial Attacks, Malicious Query Detection
TL;DR: Our paper proposes enhancing input moderation guardrails in user-facing conversational AI agents by utilizing Chain-of-Thought (CoT) fine-tuning and aligning LLM-as-a-judge methodologies.
Abstract: Large Language Models (LLMs) have demonstrated powerful capabilities that render them valuable in different applications, including conversational AI products. It is paramount to ensure the security and reliability of these products by mitigating their vulnerabilities towards malicious user interactions, which can lead to the exposure of great risks and reputational repercussions. In this work, we present a comprehensive study on the efficacy of fine-tuning and aligning Chain-of-Thought (CoT) responses of different LLMs that serve as input moderation guardrails. We systematically explore various tuning methods by leveraging a small set of training data to adapt these models as proxy defense mechanisms to detect malicious inputs and provide a reasoning for their verdicts, thereby preventing the exploitation of conversational agents. We rigorously evaluate the efficacy and robustness of different tuning strategies to generalize across diverse adversarial and malicious query types. Our experimental results outline the potential of alignment processes tailored to a varied range of harmful input queries, even with constrained data resources. These techniques significantly enhance the safety of conversational AI systems and provide a feasible framework for deploying more secure and trustworthy AI-driven interactions.
Submission Number: 32
Loading

OpenReview is a long-term project to advance science through improved peer review with legal nonprofit status. We gratefully acknowledge the support of the OpenReview Sponsors. © 2025 OpenReview