Keywords: Social Intelligence, Compromise Generation, Empathic Neutrality, Prompt Engineering
TL;DR: This paper presents a prompt-engineering framework that enables large language models to generate empathically neutral compromises between contrasting viewpoints.
Abstract: Large language models often falter when asked to reconcile opposing views, revealing biases toward asymmetric compromises. We study whether they can instead produce compromises that respect both narratives without favoring either side. Given paired accounts of the same place type written from contrasting perspectives, we generate multiple compromise candidates using prompt strategies guided by an empathy-oriented similarity model and select those with balanced similarity to both inputs. An evaluation by 50 participants on a subset of a corpus of 2,400 paired views showed that single-prompt compromises tended to be biased toward one view, whereas incorporating external neutrality evaluation into feedback-guided prompting yielded more neutral compromises than single-prompt and chain-of-thought (CoT) baselines. We also examine whether alignment enables models to acquire neutrality heuristics despite initial bias. This task offers a compact probe of how models integrate conflicting cues and exhibit neutrality in social reasoning.
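The abstract's selection step (keeping candidates with balanced similarity to both inputs) can be illustrated with a minimal sketch. The scoring rule, the function names, and the toy lexical similarity below are assumptions for illustration only; the paper's actual empathy-oriented similarity model and selection criterion are not specified here.

```python
from typing import Callable, List, Tuple

def select_balanced_compromise(
    candidates: List[str],
    view_a: str,
    view_b: str,
    similarity: Callable[[str, str], float],
) -> Tuple[str, float]:
    """Pick the candidate whose similarity to both views is high and balanced.

    Hypothetical score: min(sim_a, sim_b) - |sim_a - sim_b| rewards relevance
    to both views while penalizing asymmetry toward either one.
    """
    best, best_score = "", float("-inf")
    for cand in candidates:
        sim_a = similarity(cand, view_a)
        sim_b = similarity(cand, view_b)
        score = min(sim_a, sim_b) - abs(sim_a - sim_b)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score


# Toy lexical-overlap similarity standing in for the empathy-oriented model.
def jaccard_similarity(x: str, y: str) -> float:
    a, b = set(x.lower().split()), set(y.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 0.0
```

In practice, the `similarity` callable would be the empathy-oriented model described in the paper, and the selected candidate could be fed back into the feedback-guided prompting loop alongside an external neutrality evaluation.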
Submission Number: 89