Keywords: Biomedical Question Answering, LLM, QA Pairs
TL;DR: camera-ready submission
Abstract: Large Language Models (LLMs) have shown considerable success in open-domain question answering (ODQA). Nevertheless, their performance in specialized fields such as healthcare remains suboptimal due to insufficient domain-specific knowledge. While integrating retrieved documents as in-context examples offers some improvement, it is often inadequate. In this study, we introduce a novel approach to enhance biomedical ODQA by utilizing question-answer (QA) pairs generated from PubMed abstracts using an LLM. We prompt an LLM with these QA pairs as in-context examples for four biomedical question types: yes/no, factoid, list, and summary. Our method outperforms document retrieval on factoid and list-type questions, matches its performance on the other two, and significantly reduces inference latency across all four. We also provide a detailed empirical analysis to support the effectiveness of QA pairs in boosting performance.
Submission Number: 45