LMKG: A large-scale and multi-source medical knowledge graph for intelligent medicine applications

Published: 01 Jan 2024 · Last Modified: 10 Apr 2025 · Knowl. Based Syst. 2024 · License: CC BY-SA 4.0
Abstract: Medical Knowledge Graphs (KGs) have shown great potential in various healthcare scenarios, such as drug recommendation and clinical decision support systems. The role a medical KG can play in practical applications is determined by the scale, coverage, and quality of the medical knowledge it provides. Most existing medical KGs are extracted from a single source or only a few information sources. Knowledge extracted from such limited sources tends to be incomplete or even biased, which undermines data completeness and may lessen its effectiveness in real-world scenarios. Moreover, the coverage of entity and relation types is inadequate in most previous works, which further restricts their potential use in future applications. In this paper, we build a unified system that extracts and manages medical knowledge from heterogeneous information sources. We first employ named entity recognition and relation extraction methods to extract knowledge triplets from medical texts. We then propose a hierarchical entity alignment framework for further knowledge refinement. Based on our system, we construct a large-scale, high-quality, multi-source, and multilingual medical KG named LMKG, which covers 13 entity types and 17 relation types and contains 403,784 entity instances and 1,225,097 relation instances. We conduct extensive experiments to evaluate the quality of LMKG. Experimental results show that LMKG can effectively enhance the performance of both upstream and downstream intelligent medicine applications. We have publicly released the KG resources and the corresponding management service interface to facilitate research and applications in the medical field.
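The abstract describes LMKG as a collection of typed entities and relation triplets extracted from heterogeneous sources and refined via entity alignment. As a rough illustration only, the sketch below shows a minimal in-memory representation of such a triplet store; the class names, type labels, and sample facts are hypothetical and are not taken from the released LMKG resources or interfaces.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a triplet-based medical KG store; the entity/relation
# types and example facts are illustrative, not drawn from the actual LMKG release.

@dataclass(frozen=True)
class Entity:
    name: str    # canonical entity name (after alignment across sources)
    etype: str   # one of the KG's entity types, e.g. "Disease", "Drug"

@dataclass(frozen=True)
class Triplet:
    head: Entity
    relation: str   # one of the KG's relation types, e.g. "treated_by"
    tail: Entity
    source: str     # provenance: which information source yielded the fact

class TripletStore:
    """Tiny head-indexed triplet store for lookups over the KG."""

    def __init__(self) -> None:
        self._by_head = defaultdict(list)

    def add(self, triplet: Triplet) -> None:
        self._by_head[triplet.head].append(triplet)

    def neighbors(self, head: Entity, relation: Optional[str] = None) -> list:
        """Return triplets with the given head, optionally filtered by relation."""
        return [t for t in self._by_head[head]
                if relation is None or t.relation == relation]

# Illustrative usage with made-up facts.
store = TripletStore()
flu = Entity("influenza", "Disease")
oseltamivir = Entity("oseltamivir", "Drug")
store.add(Triplet(flu, "treated_by", oseltamivir, source="source_A"))
print(store.neighbors(flu, "treated_by"))
```

Keeping per-triplet provenance, as sketched above, is one simple way to support the multi-source refinement step the abstract mentions, since conflicting or duplicate facts can then be traced back to their originating sources during alignment.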
