Keywords: biomedical entity linking, language models, knowledge graphs, contrastive learning, representation learning, biomedical entity representation
TL;DR: We introduce a biomedical entity representation model that combines a language model and a graph neural network to fuse the textual and graph modalities of a biomedical knowledge graph.
Abstract: Modern biomedical concept representations are mostly trained on synonymous concept names from a biomedical knowledge base, ignoring inter-concept interactions and a concept's local neighborhood in the knowledge base graph. In this paper, we introduce Biomedical Entity Representation with a Graph-Augmented Multi-Objective Transformer (BERGAMOT), which leverages pre-trained language models (LMs) and graph neural networks to capture both inter-concept and intra-concept interactions from the multilingual UMLS graph. We apply a contrastive loss to textual and graph representations to make them less sensitive to surface forms and to enable intermodal knowledge exchange between the two unimodal encoders. BERGAMOT achieves state-of-the-art results in zero-shot entity linking without task-specific supervision on three monolingual datasets and the Mantra multilingual benchmark. This work is an abridged version of our recent paper~\citep{sakhovskiy-etal-2024-biomedical}.
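The intermodal contrastive objective described in the abstract could look like the following minimal sketch: an in-batch InfoNCE loss over cosine similarities between LM-produced concept-name embeddings and GNN-produced node embeddings. The function name, temperature value, and symmetric two-direction formulation are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def intermodal_contrastive_loss(text_emb: torch.Tensor,
                                graph_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss aligning text and graph views of the same concepts.

    text_emb:  (batch, dim) concept-name embeddings from the language model.
    graph_emb: (batch, dim) node embeddings from the graph encoder;
               row i of both tensors refers to the same UMLS concept.
    """
    # Cosine similarities between every text/graph pair in the batch.
    text_emb = F.normalize(text_emb, dim=-1)
    graph_emb = F.normalize(graph_emb, dim=-1)
    logits = text_emb @ graph_emb.t() / temperature

    # Matching concept pairs lie on the diagonal; all other in-batch
    # pairs serve as negatives.
    targets = torch.arange(text_emb.size(0), device=text_emb.device)

    # Symmetric loss: text-to-graph and graph-to-text retrieval.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Pulling both directions of the loss toward the same diagonal targets encourages each encoder's representation of a concept to be recoverable from the other modality, which is one common way to realize the "intermodal knowledge exchange" the abstract refers to.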
Submission Number: 65