TL;DR: The paper evaluates how well Large Language Models perform at conversational grounding, introduces a novel technique for analyzing why some models underperform, and proposes ways to improve their performance.
Abstract: In this paper, we explore the concept of conversational grounding in human dialogues, emphasizing its importance for effective communication, especially in spoken dialogue. Conversational grounding, vital for building dependable dialogue systems, involves ensuring a mutual understanding of shared information. Despite its importance, there has been limited research on this aspect of conversation in recent years, especially since the advent of Large Language Models (LLMs). Previous studies, such as Benotti and Blackburn (2021), highlighted the shortcomings of language models in conversational grounding but lacked a standardized benchmark for comparison. This gap in research becomes more significant given recent advancements in language models, which have led to new 'emergent' capabilities. Our study aims to evaluate the performance of LLMs on various aspects of conversational grounding, analyze why some models perform better than others, and propose ways to enhance the capabilities of the models that lag behind.
Paper Type: long
Research Area: Dialogue and Interactive Systems
Contribution Types: Model analysis & interpretability
Languages Studied: English