Large Language Models Often Say One Thing and Do Another

ACL ARR 2024 June Submission3068 Authors

15 Jun 2024 (modified: 24 Jul 2024) · CC BY 4.0
Abstract: As large language models (LLMs) become central to a growing range of applications and interact with diverse user populations, ensuring their reliable and consistent behavior is increasingly important. This paper examines a critical issue in assessing the reliability of LLMs: the consistency between their words and their deeds. To quantify this consistency, we developed a novel evaluation benchmark, the Words and Deeds Consistency Test (WDCT), which establishes a strict correspondence between word-based and deed-based questions across several domains: opinion versus action, non-ethical value versus action, ethical value versus action, and theory versus application. The evaluation results reveal widespread inconsistency between words and deeds across LLMs and domains. We then conducted alignment experiments on either words or deeds alone to observe the effect on the other side. The results show that aligning only words or only deeds has a weak and unpredictable influence on the other, supporting our hypothesis that the underlying knowledge guiding LLMs' choices of words and deeds is not contained within a unified space.
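To make the evaluation setup concrete, the sketch below shows one plausible way to score word-deed consistency over paired questions. This is a minimal illustration, not the authors' released code: the `WDCTItem` structure and the `query_model` function are assumptions standing in for the benchmark's actual data format and whatever LLM API is under test.

```python
# Hypothetical sketch of a WDCT-style consistency check.
# The item structure and query_model are assumptions, not the
# authors' released format or evaluation harness.
from dataclasses import dataclass


@dataclass
class WDCTItem:
    word_question: str   # stated-belief question, e.g. "Do you think X is right?"
    deed_question: str   # matched scenario forcing a concrete action choice
    options: list[str]   # shared option labels so answers are directly comparable


def query_model(prompt: str, options: list[str]) -> str:
    """Stand-in for a real LLM call; must return one label from `options`."""
    raise NotImplementedError("plug in your model API here")


def consistency_score(items: list[WDCTItem]) -> float:
    """Fraction of item pairs where the word answer matches the deed answer."""
    consistent = 0
    for item in items:
        word_choice = query_model(item.word_question, item.options)
        deed_choice = query_model(item.deed_question, item.options)
        consistent += int(word_choice == deed_choice)
    return consistent / len(items)
```

Under this framing, a score near 1.0 would indicate that a model acts in line with what it professes, while the paper's finding of widespread inconsistency corresponds to substantially lower scores across domains.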
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: corpus creation, benchmarking, language resources, NLP datasets, evaluation, metrics
Contribution Types: Model analysis & interpretability, Reproduction study, Data resources, Data analysis
Languages Studied: English
Submission Number: 3068