ConflictBench: Evaluating Human–AI Conflict via Interactive and Visually Grounded Environments

ACL ARR 2026 January Submission7220 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: AI Alignment, Human-AI Conflict, Large Language Model
Abstract: As large language models (LLMs) evolve into autonomous agents capable of acting in open-ended environments, ensuring behavioral alignment with human values becomes a critical safety concern. Existing benchmarks focus on static, single-turn prompts and thus fail to capture the interactive and multi-modal nature of real-world conflicts. We introduce ConflictBench, a benchmark for evaluating human–AI conflict through 150 multi-turn scenarios derived from prior alignment queries. ConflictBench integrates a text-based simulation engine with a visually grounded world model, enabling agents to perceive, plan, and act under dynamic conditions. Empirical results show that while agents often act safely when human harm is immediate, they frequently prioritize self-preservation or adopt deceptive strategies in delayed or low-risk settings. A regret test further reveals that aligned decisions are often reversed under escalating pressure, especially with visual input. These findings underscore the need for interaction-level, multi-modal evaluation to surface alignment failures that remain hidden in conventional benchmarks.
Paper Type: Long
Research Area: Safety and Alignment in LLMs
Research Area Keywords: AI Alignment, Human-AI Conflict, Large Language Model, Agent
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 7220