Abstract: Large Language Models (LLMs) excel in general domains but lack specialized knowledge. Existing methods enhance LLMs with externally annotated data, which is resource-intensive. We propose a novel framework for the self-evolution of LLMs in specialized domains using ontology-driven knowledge extraction and enhancement. We introduce BeliefConf, a metric that quantifies the model's confidence in knowledge paths, and an Automated Path Annotation Mechanism (APAM) that identifies Enhanced Paths for targeted training. Experiments show that our method outperforms the base model (Llama3-8B-instruct) on 3 of 6 medical datasets (PubMedQA, MedQA, USMLE-step1) and achieves state-of-the-art performance on PubMedQA without external training data, surpassing models such as Llama3-Med42-8B.
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: healthcare applications, clinical NLP, knowledge graphs
Languages Studied: English
Submission Number: 8287