Combining Analogy with Language Models for Knowledge Extraction

Published: 31 Aug 2021, Last Modified: 05 May 2023
Venue: AKBC 2021
Keywords: relation extraction, analogy, common-sense, few-shot learning, language models
TL;DR: Combines language models with analogical learning to extract commonsense facts from web text using only a few examples.
Abstract: Learning structured knowledge from natural language text has been a long-standing challenge. Previous work has focused on specific domains, mostly extracting knowledge about named entities (e.g. countries, companies, or persons) rather than general-purpose world knowledge (e.g. information about science or everyday objects). In this paper, we combine the Companion Cognitive Architecture with the BERT language model to extract structured knowledge from text, with the goal of automatically inferring missing commonsense facts from an existing knowledge base. Using the principles of distant supervision, the system learns functions called query cases that map statements expressed in natural language into knowledge base relations. The system then uses these query cases to extract structured knowledge via analogical reasoning. We run experiments on 2,679 Simple English Wikipedia articles, where the system learns high-precision facts about a variety of subjects from a few training examples, outperforming strong baselines.
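
Illustration: the abstract describes querying a language model with natural-language statements to recover knowledge base relations. The sketch below is only a minimal, hedged example of that general idea, assuming the Hugging Face transformers library and a hand-written "isa" template; it is not the paper's Companion-based query-case pipeline, and the template, relation name, and score threshold are illustrative assumptions.

# Minimal sketch (not the paper's Companion pipeline): use a masked
# language model to propose candidate fillers for a commonsense
# relation, mimicking the "statement -> KB relation" querying idea.
# Assumes the Hugging Face `transformers` library; the template and
# threshold below are illustrative choices, not values from the paper.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def propose_isa_facts(entity: str, threshold: float = 0.05):
    """Query BERT with a hand-written template and keep confident fills."""
    template = f"A {entity} is a kind of [MASK]."
    candidates = fill_mask(template)
    # Each candidate is a dict with 'token_str' and 'score'.
    return [
        ("isa", entity, c["token_str"].strip())
        for c in candidates
        if c["score"] >= threshold
    ]

if __name__ == "__main__":
    for fact in propose_isa_facts("sparrow"):
        print(fact)  # e.g. ('isa', 'sparrow', 'bird')

In the paper's setting, such template-to-relation mappings are not hand-written but learned as query cases via distant supervision and then applied through analogical reasoning.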
Subject Areas: Knowledge Representation, Semantic Web and Search, Information Extraction
Archival Status: Archival
Supplementary Material: zip