PreAdapter: Pre-training Language Models on Knowledge Graphs

Published: 01 Jan 2024, Last Modified: 20 May 2025, ISWC (2) 2024, License: CC BY-SA 4.0
Abstract: Pre-trained language models have demonstrated state-of-the-art performance on various downstream tasks such as summarization, sentiment classification, and question answering. By leveraging vast amounts of textual data during training, these models inherently capture a certain amount of factual knowledge, which is particularly beneficial for knowledge-driven tasks such as question answering. However, the factual knowledge implicitly contained in language models is incomplete. Consequently, many studies incorporate additional knowledge from Semantic Web resources such as knowledge graphs, which represent knowledge explicitly in the form of triples.
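To illustrate the triple representation mentioned in the abstract, the following is a minimal sketch (not taken from the paper; the facts and the verbalization scheme are illustrative assumptions) of how knowledge-graph statements can be expressed as (subject, predicate, object) triples and turned into short sentences that a language model could consume during pre-training:

    # Illustrative sketch only: a few knowledge-graph facts as
    # (subject, predicate, object) triples.
    triples = [
        ("Berlin", "capitalOf", "Germany"),
        ("Germany", "memberOf", "European_Union"),
    ]

    # One simple way to feed explicit knowledge to a language model is to
    # verbalize each triple into a short textual statement.
    for subj, pred, obj in triples:
        print(f"{subj} {pred} {obj}.")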