KnowDomain: Self Knowledge Generative Prompting for Large Language Models in Zero-Shot Domain-Specific QA
Abstract: In recent years, Large Language Models (LLMs) have exhibited remarkable proficiency in comprehending and generating language.
Consequently, LLMs have become an integral part of AI system building. However, it has been observed that for domain-specific QA (DSQA), direct prompting techniques do not fully leverage the capabilities of LLMs, especially in the zero-shot setting, due to the scarcity of annotated data and the unavailability of tailored retrieval data. To address this gap, we propose a self-knowledge generative prompting technique for DSQA that, in a zero-shot setting, uses the LLM itself to generate the knowledge needed for accurate responses. Experimenting with LLMs of varied sizes, ranging from 3.8B to 70B parameters, we demonstrate significant improvements, with gains of over 4% to 10% on various datasets, even improving over domain-specific models. Our code is attached to the submission.
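The abstract describes a two-stage idea: the LLM first generates background knowledge for the question, then answers conditioned on that self-generated knowledge. The paper's actual implementation is attached to the submission; the sketch below is only a hedged illustration of the general pattern, where `call_llm` is a hypothetical placeholder for any completion API, not the authors' code.

```python
def call_llm(prompt: str) -> str:
    """Placeholder LLM call; swap in a real chat/completion API client."""
    # Stubbed response so the sketch runs standalone.
    return f"[model output for: {prompt[:40]}...]"

def self_knowledge_qa(question: str) -> str:
    # Stage 1: prompt the model to generate relevant domain knowledge itself,
    # rather than retrieving it from an external (possibly unavailable) corpus.
    knowledge = call_llm(
        f"Generate background knowledge useful for answering:\n{question}"
    )
    # Stage 2: answer the question conditioned on the self-generated knowledge.
    answer = call_llm(
        f"Knowledge:\n{knowledge}\n\nQuestion: {question}\nAnswer:"
    )
    return answer

print(self_knowledge_qa("What enzyme deficiency causes phenylketonuria?"))
```

Because both stages are zero-shot prompts to the same model, the approach needs no annotated examples or tailored retrieval index, which is the gap the abstract identifies.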
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: Large Language Models, Question Answering, Zero-Shot QA
Languages Studied: English
Submission Number: 7578