Is Knowledge Embedding Fully Exploited in Language Understanding? An Empirical Study

Anonymous

16 May 2021 (modified: 05 May 2023) · ACL ARR 2021 May Blind Submission · Readers: Everyone
Abstract: The recent development of knowledge embedding (KE) enables machines to represent knowledge graphs (KGs) with low-dimensional embeddings, which facilitates utilizing KGs in various downstream natural language understanding (NLU) tasks. However, little work has systematically evaluated the impact of KE on NLU. In this work, we conduct a comprehensive analysis of KE on four downstream knowledge-driven NLU tasks using two representative knowledge-guided frameworks: knowledge augmentation and knowledge attention. From the experimental results, we find that: (1) KE models that perform better on knowledge graph completion do not necessarily benefit knowledge-driven NLU tasks more under the knowledge-guided frameworks; (2) KE can effectively benefit NLU tasks in two respects, entity similarity and entity relation information; (3) KE can further benefit pre-trained language models that have already acquired rich knowledge during pre-training. We hope these results will help guide future studies that utilize KE in NLU tasks. Our source code will be released to support further exploration.
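To make the abstract's two ingredients concrete, below is a minimal Python sketch (not the paper's released code) of a TransE-style KE scoring function and a toy knowledge-attention layer in which token representations attend over pre-trained entity embeddings; all class, parameter, and shape names here are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch, assuming frozen TransE-style entity/relation vectors.
import torch
import torch.nn as nn

def transe_score(h, r, t):
    """TransE plausibility score for a triple (h, r, t); higher is more plausible."""
    return -torch.norm(h + r - t, p=1, dim=-1)

class KnowledgeAttention(nn.Module):
    """Illustrative knowledge-attention fusion: tokens attend over entity embeddings."""
    def __init__(self, hidden_dim, ke_dim):
        super().__init__()
        self.proj = nn.Linear(ke_dim, hidden_dim)  # map KE space into the LM space

    def forward(self, token_states, entity_embs):
        # token_states: (batch, seq_len, hidden_dim) from a language model
        # entity_embs:  (batch, num_entities, ke_dim), e.g. frozen KE vectors
        keys = self.proj(entity_embs)                               # (b, e, h)
        attn = torch.softmax(token_states @ keys.transpose(1, 2),   # (b, s, e)
                             dim=-1)
        knowledge = attn @ keys                                     # (b, s, h)
        return token_states + knowledge  # knowledge-augmented token states
```

A knowledge-augmentation framework would instead concatenate or add retrieved entity embeddings to the input representations; the attention variant above lets the model weight candidate entities per token.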
Software: zip
