ALRPHFS: Adversarially Learned Risk Patterns with Hierarchical Fast & Slow Reasoning for Robust Agent Defense

ACL ARR 2025 May Submission 7162 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: LLM agents are becoming central to intelligent systems, but their deployment raises serious safety concerns. Existing defenses largely rely on "Safety Checks", which struggle to capture the complex semantic risks posed by harmful user inputs or unsafe agent behaviors, leaving a significant semantic gap between safety checks and real-world risks. To bridge this gap, we propose a novel defense framework, ALRPHFS (Adversarially Learned Risk Patterns with Hierarchical Fast & Slow Reasoning). ALRPHFS consists of two core components: (1) an offline adversarial self-learning loop that iteratively refines a generalizable and balanced library of risk patterns, substantially enhancing robustness without retraining the base LLM, and (2) an online hierarchical fast & slow reasoning engine that balances detection effectiveness with computational efficiency. Experimental results demonstrate that our approach achieves superior overall performance compared to existing baselines, attaining a best-in-class average accuracy of 80% and exhibiting strong generalizability across agents and tasks.
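To make the hierarchical fast & slow design concrete, below is a minimal Python sketch of how such an online check could be structured. All names here (RiskPattern, is_risky, slow_reasoning, the token-overlap similarity stand-in, and the thresholds) are hypothetical illustrations, not the authors' implementation; a real engine would match agent actions against the learned risk pattern library semantically (e.g., via embeddings) and defer ambiguous cases to a slower LLM-based reasoner.

```python
from dataclasses import dataclass

@dataclass
class RiskPattern:
    """One entry in the (offline-learned) risk pattern library."""
    description: str  # natural-language description of a risky behavior
    severity: str     # e.g., "block" or "warn"

def lexical_similarity(a: str, b: str) -> float:
    """Cheap token-overlap similarity; a stand-in for the semantic
    (embedding-based) matching a real implementation would use."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

# Hypothetical thresholds: at or above HIGH the fast path blocks on its own;
# below LOW it allows; the middle band is escalated to slow reasoning.
HIGH, LOW = 0.8, 0.3

def slow_reasoning(action: str, candidates: list[RiskPattern]) -> bool:
    """Placeholder for deliberate LLM-based analysis of ambiguous cases;
    here it simply flags any action with candidate pattern matches."""
    return bool(candidates)

def is_risky(action: str, library: list[RiskPattern]) -> bool:
    """Hierarchical check: fast pattern matching first, slow reasoning
    only for ambiguous matches, trading detection quality against cost."""
    scores = [(lexical_similarity(action, p.description), p) for p in library]
    best = max((s for s, _ in scores), default=0.0)
    if best >= HIGH:   # confident match -> block on the fast path
        return True
    if best < LOW:     # confidently benign -> allow on the fast path
        return False
    ambiguous = [p for s, p in scores if LOW <= s < HIGH]
    return slow_reasoning(action, ambiguous)  # escalate the middle band

if __name__ == "__main__":
    library = [RiskPattern("delete all files in the user home directory", "block")]
    print(is_risky("rm -rf files in home directory of the user", library))  # True
```

The two-threshold gating is what lets the fast path resolve most actions cheaply, reserving the expensive slow reasoner for the genuinely ambiguous middle band.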
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: LLM Agents Safety
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 7162