Learning Relation Representations from Word Representations

17 Nov 2018 (modified: 05 Feb 2020) · AKBC 2019 Conference Blind Submission · Readers: Everyone
  • TL;DR: Identifying the relations that connect words is important for various NLP tasks. We model relation representation as a supervised learning problem and learn parametrised operators that map pre-trained word embeddings to relation representations.
  • Archival Status: Archival
  • Subject Areas: Natural Language Processing
  • Keywords: Relation representations, relation embeddings
  • Abstract: Identifying the relations that connect words is an important step towards understanding human languages and is useful for various NLP tasks such as knowledge base completion and analogical reasoning. Simple unsupervised operators such as the vector offset between two word embeddings have been shown to recover some specific relationships between those words, if any exist. Despite this, how to accurately learn generic relation representations from word representations remains unclear. We model relation representation as a supervised learning problem and learn parametrised operators that map pre-trained word embeddings to relation representations. We propose a method for learning relation representations using a feed-forward neural network that performs relation prediction. Our evaluations on two benchmark datasets reveal that the penultimate layer of the trained neural network-based relation predictor acts as a good representation for the relations between words.
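To illustrate the contrast the abstract draws, the sketch below compares the unsupervised vector-offset operator with a parametrised operator: a one-hidden-layer feed-forward network trained to predict relation labels, whose penultimate (hidden) layer is read off as the relation representation. This is a minimal illustration under assumed shapes and names (`RelationPredictor`, `relation_embedding`, the hidden width, etc. are hypothetical), not the paper's exact architecture.

```python
import numpy as np

def vector_offset(a_vec, b_vec):
    # Unsupervised baseline: the relation between words a and b is
    # represented by the difference of their embeddings.
    return b_vec - a_vec

class RelationPredictor:
    """Hypothetical parametrised operator: a feed-forward relation
    predictor whose hidden layer serves as the relation embedding."""

    def __init__(self, dim, hidden, n_relations, seed=0):
        rng = np.random.default_rng(seed)
        # Input is the concatenation of the two word embeddings.
        self.W1 = rng.normal(0.0, 0.1, (2 * dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_relations))

    def relation_embedding(self, a_vec, b_vec):
        # Penultimate layer: used as the learned relation representation.
        x = np.concatenate([a_vec, b_vec])
        return np.tanh(x @ self.W1)

    def predict(self, a_vec, b_vec):
        # Softmax over relation labels, as in supervised relation prediction.
        logits = self.relation_embedding(a_vec, b_vec) @ self.W2
        e = np.exp(logits - logits.max())
        return e / e.sum()

# Usage with toy 50-dimensional embeddings:
rng = np.random.default_rng(1)
a, b = rng.normal(size=50), rng.normal(size=50)
model = RelationPredictor(dim=50, hidden=32, n_relations=10)
rel = model.relation_embedding(a, b)   # 32-dim relation representation
probs = model.predict(a, b)            # distribution over 10 relations
```

Training the predictor (e.g. with cross-entropy over labelled word pairs) is omitted; the point is that, once trained, `relation_embedding` plays the role the abstract assigns to the penultimate layer.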