Pre-trained Language Models (LMs) have yielded significant performance gains on a variety of language-related tasks in the biomedical domain. However, existing biomedical LMs demonstrate a limited understanding of complex, domain-specific concept structures and of the factual information stored in biomedical Knowledge Graphs (KGs). We propose \textbf{GRABLI} (Knowledge \textbf{Gra}ph and \textbf{B}iomedical Language Model A\textbf{li}gnment), a novel pre-training method that enriches an LM with external knowledge by simultaneously learning a separate KG encoder and aligning LM and graph representations. Given a textual sequence, we normalize biomedical concept mentions to the Unified Medical Language System (UMLS) KG and use the local KG subgraphs as cross-modal positive samples for the mentioned concepts. Our empirical results demonstrate that applying our proposed method to various state-of-the-art biomedical LMs, including PubMedBERT and BioLinkBERT, enhances their performance on diverse language understanding tasks, even after brief pre-training on a small alignment dataset derived from PubMed scientific abstracts.
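To make the cross-modal alignment objective concrete, the sketch below shows one common way such an alignment can be formulated: a symmetric InfoNCE-style contrastive loss between LM mention embeddings and KG subgraph embeddings, where the subgraph of the mentioned UMLS concept serves as the positive sample for that mention. This is a minimal illustration under assumed design choices (batch-wise negatives, cosine similarity, a temperature of 0.07); the function name, tensor shapes, and hyperparameters are placeholders and not drawn from the paper.

```python
# Hypothetical sketch of text-graph contrastive alignment (InfoNCE-style).
# Assumes paired batches: row i of `text_emb` (a concept mention encoded by
# the LM) is positive for row i of `graph_emb` (its UMLS subgraph encoded by
# the KG encoder); all other rows act as in-batch negatives.
import torch
import torch.nn.functional as F


def alignment_loss(text_emb: torch.Tensor,
                   graph_emb: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over (mention, subgraph) embedding pairs.

    text_emb:  (B, d) mention embeddings from the language model.
    graph_emb: (B, d) subgraph embeddings from the KG encoder.
    """
    text_emb = F.normalize(text_emb, dim=-1)
    graph_emb = F.normalize(graph_emb, dim=-1)

    # Cosine-similarity logits between every mention and every subgraph.
    logits = text_emb @ graph_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Contrast in both directions: text -> graph and graph -> text.
    loss_t2g = F.cross_entropy(logits, targets)
    loss_g2t = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_t2g + loss_g2t)
```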