Relevance in Dialogue: Is Less More? An Empirical Comparison of Existing Metrics, and a Novel Simple Metric
Keywords: dialogue, dialogue relevance, dialogue evaluation, domain generalization, negative sampling, IDK
TL;DR: We evaluate existing methods for measuring dialogue relevance, find them to be highly sensitive to the dataset, and propose a new metric with better overall performance and reduced dataset sensitivity.
Abstract: In this work, we evaluate several existing dialogue relevance metrics, finding that they depend strongly on the dataset and often correlate poorly with human relevance scores, and we propose modifications that reduce data requirements and domain sensitivity while improving correlation. Our proposed metric achieves state-of-the-art performance on the HUMOD dataset while reducing measured sensitivity to the dataset by 37%-66%. We achieve this without fine-tuning a pretrained language model, using only 3,750 unannotated human dialogues and a single negative example. Despite these constraints, we demonstrate competitive performance on four datasets from different domains. Our code, including our metric and experiments, is open-sourced.
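To make the abstract's training setup concrete, below is a minimal sketch of negative sampling with a single negative example per context: each (context, true response) pair is matched with one response drawn from an unrelated dialogue. This is an illustrative reconstruction, not the paper's implementation; the helper `make_training_pairs` and its exact sampling scheme are hypothetical.

```python
import random

def make_training_pairs(dialogues, seed=0):
    """Pair each (context, true response) with ONE randomly sampled
    negative response taken from another dialogue (negative sampling).

    `dialogues` is a list of utterance lists. This helper is illustrative
    only; the paper's actual data pipeline may differ.
    """
    rng = random.Random(seed)
    all_responses = [turns[-1] for turns in dialogues if len(turns) >= 2]
    pairs = []
    for turns in dialogues:
        if len(turns) < 2:
            continue
        context, response = turns[:-1], turns[-1]
        # Single negative example: resample until we hit a different response.
        negative = response
        while negative == response:
            negative = rng.choice(all_responses)
        pairs.append((context, response, 1))  # relevant
        pairs.append((context, negative, 0))  # irrelevant
    return pairs

if __name__ == "__main__":
    dialogues = [
        ["How was the movie?", "Great, the ending surprised me."],
        ["Any plans for dinner?", "Thinking about pasta tonight."],
    ]
    for context, response, label in make_training_pairs(dialogues):
        print(label, context, "->", response)
```

A relevance metric can then be trained on these labeled pairs; per the abstract, this requires only unannotated human dialogues, since the negative labels are generated automatically.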
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/relevance-in-dialogue-is-less-more-an/code)