Abstract: Polite speech poses a fundamental alignment challenge for large language models (LLMs).
Humans deploy a rich repertoire of linguistic strategies to balance informational and social goals, from positive politeness strategies that build rapport (compliments, expressions of interest) to negative politeness strategies that minimize imposition (hedging, indirectness).
We investigate whether LLMs employ a similarly context-sensitive repertoire by comparing human and LLM responses in both constrained and open-ended production tasks.
We find that larger models ($\ge$70B parameters) successfully replicate key preferences documented in the computational pragmatics literature, and that, surprisingly, human evaluators prefer LLM-generated responses in open-ended contexts.
However, further linguistic analyses reveal that models disproportionately rely on negative politeness strategies even in positive contexts, potentially leading to misinterpretations.
While modern LLMs demonstrate an impressive command of politeness strategies, these subtle differences raise important questions about pragmatic alignment in AI systems.
Paper Type: Long
Research Area: Discourse and Pragmatics
Research Area Keywords: politeness, polite speech, politeness strategies, language production
Contribution Types: NLP engineering experiment, Reproduction study, Data analysis
Languages Studied: English
Submission Number: 5056