Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based Question Answering

ACL ARR 2024 June Submission4911 Authors

16 Jun 2024 (modified: 06 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Open-ended question answering requires models to find appropriate evidence to form well-reasoned, comprehensive, and helpful answers. In practical applications, models also need to engage in extended discussions of potential scenarios closely relevant to the question. When augmented with a retrieval module, open-source Large Language Models (LLMs) can produce coherent answers, often with different focuses, but they remain sub-optimal in reliable evidence selection and in-depth question analysis. In this paper, we propose a novel Chain-of-Discussion framework that leverages the synergy among multiple open-source LLMs to provide *more correct* and *more comprehensive* answers for open-ended QA, even though no single model is strong enough on its own. Our experiments show that discussions among multiple LLMs play a vital role in enhancing the quality of answers. We will release our data and code for further research.
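The abstract describes the framework only at a high level. As a rough illustration of what a retrieval-augmented multi-model discussion loop could look like, here is a minimal Python sketch; the function names, prompt wording, and final synthesis step are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of a multi-LLM "chain of discussion" for evidence-based QA.
# Everything here (names, prompts, synthesis step) is illustrative, not the paper's code.
from typing import Callable, List

# A "model" is any callable mapping a prompt string to a completion string,
# e.g. a thin wrapper around a locally hosted open-source LLM.
Model = Callable[[str], str]

def retrieve_evidence(question: str, corpus: List[str], k: int = 3) -> List[str]:
    """Toy retriever: rank passages by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:k]

def chain_of_discussion(question: str, corpus: List[str],
                        models: List[Model], rounds: int = 2) -> str:
    evidence = "\n".join(retrieve_evidence(question, corpus))
    # Round 0: each model drafts its own evidence-grounded answer.
    drafts = [m(f"Question: {question}\nEvidence:\n{evidence}\nDraft an answer.")
              for m in models]
    # Discussion rounds: each model reads the others' drafts and revises its own.
    for _ in range(rounds):
        drafts = [
            models[i](
                f"Question: {question}\nEvidence:\n{evidence}\n"
                "Other models' answers:\n"
                + "\n---\n".join(d for j, d in enumerate(drafts) if j != i)
                + "\nRevise your answer: keep well-supported points, correct errors, "
                  "and add relevant scenarios the others missed."
            )
            for i in range(len(models))
        ]
    # Final synthesis by the first model (one of several possible design choices).
    return models[0]("Merge these answers into one comprehensive, well-supported "
                     "response:\n" + "\n---\n".join(drafts))
```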
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: chain-of-discussion, complex evidence-based question answering, multi-model collaboration
Contribution Types: NLP engineering experiment
Languages Studied: Chinese
Submission Number: 4911