An Interpretable Answer Scoring Framework

Published: 31 May 2024, Last Modified: 18 Jun 2024 · Gen-IR@SIGIR24 · CC BY 4.0
Keywords: LLMs, Knowledge graphs, Question intent
TL;DR: We propose an automatic scoring system for answer quality that is reliable, interpretable, and faithful.
Abstract: In a world where users can pose any natural language question to an LLM, the focus is on generating answers that provide reliable information while satisfying the original intent. LLMs are known to generate multiple versions of an answer to the same question, some of which may be better than others. Identifying the response that most adequately addresses the question is non-trivial. To tackle this problem, we propose an interpretable scoring system that considers three aspects of an answer: knowledge, content, and structure. We provide an answer quality scoring method that is explainable and can serve as a key signal for determining a good answer.
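
The abstract does not detail how the three aspect scores are combined; the sketch below is a minimal illustration, assuming a simple weighted aggregation and hypothetical per-aspect scorers supplied by the caller. The function names, weights, and placeholder scorers are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: combine hypothetical per-aspect scores
# (knowledge, content, structure) into one interpretable answer score.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class AnswerScore:
    """Interpretable result: an overall value plus a per-aspect breakdown."""
    overall: float
    breakdown: Dict[str, float]


def score_answer(
    question: str,
    answer: str,
    aspect_scorers: Dict[str, Callable[[str, str], float]],
    weights: Dict[str, float],
) -> AnswerScore:
    """Weighted aggregation of per-aspect scores, each assumed to lie in [0, 1]."""
    breakdown = {name: scorer(question, answer) for name, scorer in aspect_scorers.items()}
    total_weight = sum(weights[name] for name in breakdown)
    overall = sum(weights[name] * score for name, score in breakdown.items()) / total_weight
    return AnswerScore(overall=overall, breakdown=breakdown)


if __name__ == "__main__":
    # Placeholder scorers; real ones might check facts against a knowledge graph,
    # rate content coverage of the question intent, and assess answer structure.
    scorers = {
        "knowledge": lambda q, a: 0.9,
        "content": lambda q, a: 0.8,
        "structure": lambda q, a: 0.7,
    }
    weights = {"knowledge": 0.5, "content": 0.3, "structure": 0.2}
    result = score_answer("What causes tides?", "Tides are mainly caused by ...", scorers, weights)
    print(result.overall, result.breakdown)
```

Keeping the per-aspect breakdown alongside the overall score is what makes such a system interpretable: a low overall value can be traced back to the aspect responsible for it.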
Submission Number: 7