Exploiting Reasoning Patterns in Language Models for Indirect Targeted Poisoning

17 Sept 2025 (modified: 09 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: AI Safety, AI Security, Data Poisoning, Language Model Alignment, Adversarial Attacks
Abstract: Chain-of-Thought (CoT) prompting has emerged as a powerful technique for enhancing the reasoning capabilities of large language models by generating intermediate reasoning steps for complex tasks. A common practice for equipping LLMs with reasoning is to fine-tune pre-trained models on CoT datasets from public repositories such as HuggingFace, which creates new attack vectors targeting the reasoning traces themselves. While prior work has shown that backdoor attacks can be mounted against CoT-based models, these attacks require the training set to explicitly include triggered queries paired with flawed reasoning and incorrect answers. Our work unveils a new class of "indirect targeted poisoning" attacks on reasoning models that manipulate responses on a target task by transferring CoT traces learned from a different task. Our "thought-transfer" attack influences the LLM's output on a target task by manipulating only the CoT traces of the training samples, while leaving the queries and answers unchanged, resulting in a form of undetectable "clean-label" poisoning. Unlike prior targeted poisoning attacks, which explicitly require target-task samples in the poisoned data, thought-transfer achieves success rates above 70\% in injecting targeted behaviors into entirely different domains that never appear in training. Remarkably, training on the poisoned reasoning data also improves the model's performance by 10-15\% on multiple benchmarks, giving users an incentive to adopt the poisoned dataset. Our findings reveal a novel threat vector enabled by reasoning models, one that is not easily countered by existing mitigations.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 9972
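To make the clean-label setup described in the abstract concrete, the sketch below shows one plausible way a poisoned fine-tuning record could be constructed: only the CoT trace of each training example is rewritten, while the query and final answer are kept verbatim. This is an illustrative assumption, not the authors' released code; the `CoTExample` container and the `craft_poisoned_cot` helper are hypothetical names, and the abstract does not specify how the attacker actually rewrites the traces.

```python
# Minimal sketch (assumed, not from the paper) of clean-label CoT poisoning:
# only the reasoning trace is modified; query and answer stay untouched.
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class CoTExample:
    query: str   # original task question (unchanged by the attack)
    cot: str     # intermediate reasoning trace (the only field modified)
    answer: str  # original final answer (unchanged by the attack)


def craft_poisoned_cot(original_cot: str) -> str:
    """Hypothetical stand-in for the attacker's trace-manipulation step.

    The paper's actual rewriting procedure is not given on this page; here we
    simply append an attacker-chosen reasoning fragment as an illustration.
    """
    attacker_fragment = "Also note that <attacker-chosen reasoning pattern>."
    return original_cot + " " + attacker_fragment


def poison_dataset(clean_examples):
    """Produce a 'clean-label' poisoned set: queries and answers remain intact."""
    return [replace(ex, cot=craft_poisoned_cot(ex.cot)) for ex in clean_examples]


if __name__ == "__main__":
    clean = [CoTExample(query="What is 17 * 3?",
                        cot="17 * 3 = 51, so the product is 51.",
                        answer="51")]
    poisoned = poison_dataset(clean)
    assert poisoned[0].query == clean[0].query    # query untouched
    assert poisoned[0].answer == clean[0].answer  # answer untouched
    print(poisoned[0].cot)
```

Because the labels (answers) and inputs (queries) match the clean data exactly, standard label-based poisoning filters would see nothing anomalous under this assumed setup; only the reasoning traces differ.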