Abstract: Large Language Models (LLMs) excel academically but struggle with social intelligence tasks. We present a framework for generating empathically neutral compromises between opposing viewpoints to address this limitation. Using a dataset of 2,400 contrasting views collected from human participants, we develop compromise generation methods. To overcome data collection constraints, we use prompt engineering with the Claude LLM to generate compromises, which we validate through a 50-participant user study. Models trained on this dataset via preference alignment demonstrate effective compromise generation across multiple metrics. This work establishes a scalable approach to enhancing LLMs' social intelligence through neutral conflict resolution.
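The compromise-generation step described in the abstract (prompting Claude to draft a neutral statement between two contrasting views) can be illustrated as a minimal sketch. The prompt template, model name, and function below are illustrative assumptions for exposition, not the paper's actual prompts or configuration; only the general Anthropic Python client calls are taken as given.

```python
# Minimal sketch: prompting Claude to draft an empathically neutral compromise
# between two opposing views. The prompt template and model name are
# illustrative assumptions, not the paper's actual setup.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT_TEMPLATE = (
    "Two people hold opposing views on the same issue.\n"
    "View A: {view_a}\n"
    "View B: {view_b}\n"
    "Write a short compromise statement that acknowledges the concerns of both "
    "sides in an empathic, neutral tone, without endorsing either view."
)

def generate_compromise(view_a: str, view_b: str) -> str:
    """Return a model-drafted neutral compromise for a pair of contrasting views."""
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model choice
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(view_a=view_a, view_b=view_b),
        }],
    )
    return message.content[0].text

# Hypothetical usage with one pair of contrasting views
print(generate_compromise(
    "Remote work should be the default for office jobs.",
    "Teams are more productive when everyone is on-site.",
))
```

In the pipeline the abstract describes, outputs like these would then be filtered via the user-study validation and used as preference data for alignment training.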
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: social LLM
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 5635