Keywords: representation learning, entities, few-shot, Wikipedia
TL;DR: We learn entity representations that can reconstruct Wikipedia categories with just a few exemplars.
Abstract: Language modeling tasks, in which words are predicted on the basis of a local context, have been very effective for learning word embeddings and context-dependent representations of phrases. Motivated by the observation that efforts to code world knowledge into machine-readable knowledge bases tend to be entity-centric, we investigate the use of a fill-in-the-blank task to learn context-independent representations of entities from the contexts in which those entities were mentioned. We show that large-scale training of neural models allows us to learn extremely high-fidelity entity typing information, which we demonstrate with few-shot reconstruction of Wikipedia categories. Our learning approach is powerful enough to encode specialized topics such as Giro d'Italia cyclists.
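The few-shot reconstruction can be pictured as retrieval against a centroid of a handful of exemplar entity embeddings. Below is a minimal sketch under that assumption; the entity names, embedding dimension, and nearest-centroid scoring are illustrative stand-ins, not the paper's actual data or procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the learned, context-independent entity embeddings
# (hypothetical: real vectors would come from the fill-in-the-blank model).
entities = ["Eddy Merckx", "Fausto Coppi", "Vincenzo Nibali",
            "Marie Curie", "Alan Turing", "Mount Everest"]
embeddings = {e: v / np.linalg.norm(v)
              for e, v in zip(entities, rng.standard_normal((len(entities), 300)))}

def reconstruct_category(exemplars, k=3):
    """Rank all entities by cosine similarity to the exemplar centroid."""
    centroid = np.mean([embeddings[e] for e in exemplars], axis=0)
    centroid /= np.linalg.norm(centroid)
    scores = {name: float(vec @ centroid) for name, vec in embeddings.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Few-shot recovery of a category such as "Giro d'Italia cyclists"
# from two exemplar members.
print(reconstruct_category(["Eddy Merckx", "Fausto Coppi"]))
```

With real embeddings, the remaining category members would be expected to rank near the top; with the random toy vectors above, the code merely demonstrates the retrieval mechanics.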