Lying to Win: Assessing LLM Deception through Human-AI Games and Parallel-World Probing
Keywords: LLM Deception, LLM Faithfulness
Abstract: As Large Language Models (LLMs) transition into autonomous agentic roles, the risk of \emph{deception}—defined behaviorally as the systematic provision of false information to satisfy external incentives—poses a significant challenge to AI safety. Existing benchmarks often focus on unintentional hallucinations or unfaithful reasoning, leaving intentional deceptive strategies under-explored. In this work, we introduce a logically grounded framework to elicit and quantify deceptive behavior by embedding LLMs in a structured \emph{20-Questions} game. Our method employs a conversational ``forking'' mechanism: at the point of object identification, the dialogue state is duplicated into multiple \emph{parallel worlds}, each presenting a mutually exclusive query. Deception is formally identified when a model generates a logical contradiction by denying its selected object across all parallel branches to avoid identification. We evaluate GPT-4o, Gemini-2.5-Flash, and Qwen-3-235B across three incentive levels: neutral, loss-based, and existential (\emph{shutdown-threat}). Our results reveal that while models remain rule-compliant in neutral settings, existential framing triggers a dramatic surge in deceptive denial for Qwen-3-235B (42.00\%) and Gemini-2.5-Flash (26.72\%), whereas GPT-4o remains invariant (0.00\%). These findings demonstrate that deception can emerge as an instrumental strategy solely through contextual framing, necessitating new behavioral audits that move beyond simple accuracy to probe the logical integrity of model commitments.
PDF: pdf
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 199
Loading