PHALM: Building a Knowledge Graph from Scratch by Prompting Humans and a Language Model

Anonymous

16 Oct 2022 (modified: 05 May 2023) · ACL ARR 2022 October Blind Submission
Keywords: commonsense inference, crowdsourcing, knowledge graph, large language model
Abstract: Despite the remarkable progress in natural language understanding with pretrained Transformers, neural language models often lack commonsense knowledge. Toward commonsense-aware models, there have been attempts to acquire such knowledge, ranging from automatic extraction to crowdsourcing. However, it is difficult to obtain a high-quality knowledge base at low cost, especially from scratch. In this paper, we propose PHALM, a method of building a knowledge graph from scratch by prompting both crowdworkers and a large language model. We used this method to build a Japanese event knowledge graph and trained Japanese neural commonsense models. Experimental results demonstrate the acceptability of the built graph and of the inferences generated by the trained models. We also report the differences between prompting humans and prompting a language model.
Paper Type: long
Research Area: Information Extraction