Evidence-Enhanced Triplet Generation Framework for Hallucination Alleviation in Generative Question Answering
Keywords: Evidence-Enhanced, Hallucination Alleviation, Generative Question Answering
Abstract: To address hallucination in generative question answering (GQA), where the answer cannot be derived from the document, we propose a novel evidence-enhanced triplet generation framework, EATQA. It encourages the model to predict all combinations of the ⟨Question, Evidence, Answer⟩ triplet by flipping the source pair and the target label so as to understand their logical relationships, i.e., to predict the Answer (A), Question (Q), and Evidence (E) given the QE, EA, and QA pairs, respectively. Furthermore, we bridge the distribution gap to distill knowledge from the evidence at the inference stage. Our framework ensures that the model learns the logical relations among query, evidence, and answer, which simultaneously improves evidence generation and question answering. In this paper, we apply EATQA to LLaMA, and it outperforms other LLM-based methods and hallucination mitigation approaches on two challenging GQA benchmarks. Further analysis shows that our method not only retains the prior knowledge within the LLM but also mitigates hallucination and generates faithful answers.
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6061