Abstract: As large language models (LLMs) take on growing roles as automated evaluators in practical settings, a critical question arises: *Can individuals persuade an LLM judge to assign unfairly high scores?* This study is the first to reveal that strategically embedded persuasive language can bias LLM judges when scoring mathematical reasoning tasks, where correctness should be independent of stylistic variation.
Grounded in Aristotle’s rhetorical principles, we formalize seven persuasion techniques (*Majority*, *Consistency*, *Flattery*, *Reciprocity*, *Pity*, *Authority*, *Identity*) and embed them into otherwise identical responses. Across six math benchmarks, we find that persuasive language leads LLM judges to assign inflated scores to incorrect solutions, by up to 8\% on average, with *Consistency* causing the most severe distortion. Notably, increasing model size does not substantially mitigate this vulnerability. Further analysis demonstrates that combining multiple persuasion techniques amplifies the bias, and pairwise evaluation is likewise susceptible. Moreover, the persuasive effect persists under counter-prompting strategies, highlighting a critical vulnerability in LLM-as-a-Judge pipelines and underscoring the need for robust defenses against persuasion-based attacks.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: adversarial attacks/examples/training, robustness
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Submission Number: 1672
Loading