Keywords: Graph Neural Network, heterogeneous graphs, generalized zero-shot learning
TL;DR: We propose that using medical knowledge graphs in conjunction with prevalent word embedding models is beneficial for multi-modal medical imaging tasks.
Abstract: Medical knowledge graphs (KGs) are a source of highly granular, semantically rich, and curated medical ontologies. However, they have seen limited adoption in multi-modal medical imaging tasks owing to their sheer size. Instead, semantic label embeddings from language models such as BERT and word2vec are currently employed. These embeddings are derived from word co-occurrences and encode rich semantic associations. However, they lack the explicit relational information that KGs intrinsically encode. On the other hand, a KG's expressive power is limited by its parsed size. In view of these observations, we propose a way to learn KG embeddings on the parsed heterogeneous graph and complement them with language embeddings. We test our hypothesis on generalized zero-shot learning of chest radiographs.
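The abstract's core idea, combining KG embeddings with language embeddings for zero-shot classification, can be sketched as follows. This is an illustrative, hypothetical sketch only: the dimensions, the concatenation-based fusion, and the linear compatibility map are assumptions, not the paper's actual method, and all embeddings here are randomly initialized placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, kg_dim, lang_dim, img_dim = 5, 64, 300, 512

# Placeholder embeddings: in practice, kg_emb would come from a GNN over the
# parsed heterogeneous KG, and lang_emb from BERT/word2vec label embeddings.
kg_emb = rng.normal(size=(num_classes, kg_dim))
lang_emb = rng.normal(size=(num_classes, lang_dim))

# Assumed fusion by simple concatenation of the two embedding sources.
label_emb = np.concatenate([kg_emb, lang_emb], axis=1)

# A linear compatibility map from image-feature space to the fused label
# space (randomly initialized here; it would be learned in practice).
W = rng.normal(size=(img_dim, kg_dim + lang_dim))

image_feat = rng.normal(size=(img_dim,))
scores = label_emb @ (W.T @ image_feat)  # one compatibility score per class
pred = int(np.argmax(scores))            # predicted class, seen or unseen
```

Because the label embeddings exist for every class name in the KG and vocabulary, scoring works for classes never seen during training, which is the premise of the generalized zero-shot setting.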