Context Does Matter: Implications for Crowdsourced Evaluation Labels in Task-Oriented Dialogue Systems

Anonymous

16 Dec 2023, ACL ARR 2023 December Blind Submission, Readers: Everyone
TL;DR: In this paper, we investigate the effect of dialogue context on the consistency of crowdsourced evaluation labels.
Abstract: Crowdsourced labels play a crucial role in evaluating task-oriented dialogue systems (TDSs), but obtaining high-quality, consistent ground-truth labels from annotators is challenging. When evaluating a TDS, annotators must fully comprehend the dialogue before providing judgments. Previous studies suggest using only a portion of the dialogue context in the annotation process; however, the impact of this limitation on label quality remains unexplored. This study investigates the influence of dialogue context on annotation quality, considering truncated context for relevance and usefulness labeling. We further propose using large language models (LLMs) to summarize the dialogue context into a short but rich description, and we study how this affects annotator performance. We find that reducing context leads to more positive ratings. Conversely, providing the entire dialogue context yields higher-quality relevance ratings but introduces ambiguity in usefulness ratings. Using the first user utterance as context leads to ratings consistent with those obtained from the entire dialogue, with significantly reduced annotation effort. Our findings show how task design, particularly the availability of dialogue context, affects the quality and consistency of crowdsourced evaluation labels.
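
A minimal sketch of the LLM-based context summarization idea mentioned in the abstract, assuming an OpenAI-style chat completions API; the model name, prompt wording, and helper function are illustrative assumptions, not the authors' implementation:

# Sketch (not the submission's code): condense prior dialogue turns into a
# short description that could be shown to crowd annotators in place of the
# full dialogue context.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def summarize_context(dialogue_turns: list[str]) -> str:
    """Summarize earlier dialogue turns into a brief description for annotators."""
    context = "\n".join(dialogue_turns)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the abstract does not specify one
        messages=[
            {"role": "system",
             "content": "Summarize the dialogue so far in 2-3 sentences, "
                        "keeping the user's goal and any stated constraints."},
            {"role": "user", "content": context},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    turns = [
        "User: I'm looking for a cheap Italian restaurant in the city centre.",
        "System: Pizza Express is a cheap Italian place in the centre.",
        "User: Great, can you book a table for two at 7pm?",
    ]
    print(summarize_context(turns))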
Paper Type: long
Research Area: Information Retrieval and Text Mining
Contribution Types: Data analysis
Languages Studied: English