Keywords: large language models, document inconsistency detection, evaluation metrics
Abstract: Large language models (LLMs) are proving useful across many domains, thanks to capabilities that emerge from large-scale training data and model sizes. However, research on LLM-based document inconsistency detection remains limited. The task has two key aspects: (i) classifying whether any inconsistency exists, and (ii) providing evidence by identifying the inconsistent sentences. We focus on the latter, introducing new comprehensive evidence-extraction metrics and a redact-and-retry framework with constrained filtering that substantially improves LLM-based document inconsistency detection over direct prompting. We support our claims with promising experimental results.
Paper Type: Short
Research Area: NLP Applications
Research Area Keywords: NLP Applications, Resources and Evaluation
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 1269