Cognitive Analysis Graph–Guided Multi-Turn Safety Enhancement for Large Language Models

ACL ARR 2026 January Submission 8125 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Large Language Models, Safety alignment
Abstract: Large Language Models exhibit advanced reasoning capabilities that enable them to tackle complex tasks, but these capabilities also increase the risk of generating harmful content, particularly in multi-turn dialogues. Existing inference-phase safety alignment methods face three major challenges. First, they do not model the relationship between the question and the response, so models can easily be led into producing harmful content in complex scenarios. Second, they struggle to adapt to defense instructions. Third, they fail to effectively leverage historical information when generating safe responses. To address these challenges, we propose CogGSE, an inference-time safety alignment framework that explicitly models the cognitive process of problem solving through a structured cognitive analysis graph. We retrieve a question-specific graph so that the safety information is tailored to the query. To fully exploit historical information in multi-turn settings, we retrieve relevant graphs from previous turns and selectively retain safety-related nodes, which are used jointly with the current-turn graph to guide safe response generation. This design enables transparent, controllable reasoning while maintaining strong safety guarantees. Extensive experiments demonstrate the effectiveness of our approach across multiple safety scenarios.
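The abstract outlines a multi-step inference-time pipeline: retrieve a question-specific graph, pull graphs from previous turns, keep only their safety-related nodes, and use the merged result to steer generation. The sketch below illustrates that control flow only; all names (`CogGraph`, `retrieve_graph`, etc.) are hypothetical, and the paper's actual graph construction, retrieval, and LLM conditioning are not specified here.

```python
# Illustrative sketch of a CogGSE-style inference-time pipeline.
# All identifiers are hypothetical, not from the paper.
from dataclasses import dataclass, field

@dataclass
class CogGraph:
    """Toy cognitive analysis graph: node_id -> {"text": ..., "safety": bool}."""
    nodes: dict = field(default_factory=dict)

def retrieve_graph(query: str) -> CogGraph:
    # Placeholder retrieval: a real system would query a graph store
    # conditioned on the current question. Here a keyword stands in
    # for a learned safety annotation.
    g = CogGraph()
    g.nodes["q"] = {"text": query, "safety": "bypass" in query.lower()}
    return g

def retain_safety_nodes(history_graphs: list) -> dict:
    # Selectively keep only safety-related nodes from previous turns.
    kept = {}
    for i, g in enumerate(history_graphs):
        for nid, node in g.nodes.items():
            if node["safety"]:
                kept[f"turn{i}:{nid}"] = node
    return kept

def guide_response(query: str, history_graphs: list) -> str:
    current = retrieve_graph(query)                     # current-turn graph
    safety_context = retain_safety_nodes(history_graphs)  # historical safety nodes
    # A real system would condition the LLM on the merged graphs; here we
    # just branch on the aggregated safety signal for illustration.
    unsafe = any(n["safety"] for n in current.nodes.values()) or bool(safety_context)
    return "refusal" if unsafe else "helpful answer"
```

Note how a benign follow-up question still triggers a refusal once an earlier turn contributed a safety-related node, which is the multi-turn behavior the abstract describes.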
Paper Type: Long
Research Area: Safety and Alignment in LLMs
Research Area Keywords: safety and alignment
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 8125