Bridging the Safety Gap: A Guardrail Pipeline for Trustworthy LLM Inferences

28 Sept 2024 (modified: 13 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: guardrail, safety, llm
Abstract: We present Wildflare GuardRail, a guardrail pipeline designed to enhance the safety and reliability of Large Language Model (LLM) inferences. Wildflare GuardRail integrates four key functional modules: SAFETY DETECTOR, GROUNDING, CUSTOMIZER, and REPAIRER, addressing safety challenges across multiple dimensions of LLM inference. It incorporates an unsafe content detection model that identifies issues such as toxicity, bias, and prompt injection; a hallucination detection model that flags hallucinated LLM outputs and simultaneously provides explanations for the hallucinations; and a fixing model that corrects LLM outputs based on these explanations. Additionally, Wildflare GuardRail employs GROUNDING to enrich user queries with relevant context, and utilizes CUSTOMIZER to allow users to define flexible protocols for handling specific safety requirements. Our experiments demonstrate that Wildflare GuardRail enhances the safety and robustness of LLM inference, offering an adaptable and scalable solution.
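To make the module flow concrete, the following is a minimal sketch of how the four stages described in the abstract could compose around an LLM call. All function bodies, names, and heuristics here are illustrative assumptions, not the paper's actual models: the real SAFETY DETECTOR, hallucination detector, and REPAIRER are learned models, stubbed below with trivial string checks.

```python
# Hypothetical sketch of the Wildflare GuardRail pipeline shape.
# Stage names follow the abstract; all logic is a placeholder assumption.

def safety_detector(query: str) -> bool:
    # Stand-in for the unsafe content detection model
    # (toxicity, bias, prompt injection). Returns True if the query is safe.
    return "ignore previous instructions" not in query.lower()

def grounding(query: str, context_store: dict) -> str:
    # Stand-in for GROUNDING: enrich the query with relevant context.
    extra = context_store.get(query, "")
    return f"{query}\n[context] {extra}" if extra else query

def hallucination_check(output: str) -> tuple[bool, str]:
    # Stand-in for the hallucination detection model, which also
    # returns an explanation for any detected hallucination.
    if "[unverified]" in output:
        return True, "output contains an unverified claim marker"
    return False, ""

def repairer(output: str, explanation: str) -> str:
    # Stand-in for the fixing model: correct the output using the explanation.
    return output.replace("[unverified]", "").strip() + " (revised)"

def customizer(output: str, rules) -> str:
    # Stand-in for CUSTOMIZER: apply user-defined handling protocols.
    for pattern, replacement in rules:
        output = output.replace(pattern, replacement)
    return output

def guardrail_pipeline(query, llm, context_store=None, rules=()):
    if not safety_detector(query):
        return "Request blocked by safety policy."
    enriched = grounding(query, context_store or {})
    output = llm(enriched)
    hallucinated, explanation = hallucination_check(output)
    if hallucinated:
        output = repairer(output, explanation)
    return customizer(output, rules)
```

With a mock `llm` callable, a safe query flows through grounding, detection, and repair, while an injection-style query is blocked before the model is ever invoked.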
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13000