Modelling Monotonic and Non-Monotonic Attribute Dependencies with Embeddings: A Theoretical Analysis

Published: 31 Aug 2021, Last Modified: 05 May 2023 · AKBC 2021
Keywords: Entity embeddings, learning & reasoning, theoretical limitations
TL;DR: We theoretically analyse whether logical dependencies between attributes can be modelled in terms of entity and attribute embeddings.
Abstract: During the last decade, entity embeddings have become ubiquitous in Artificial Intelligence. Such embeddings essentially serve as compact but semantically meaningful representations of the entities of interest. In most approaches, vectors are used for representing the entities themselves, as well as for representing their associated attributes. An important advantage of using attribute embeddings is that (some of the) semantic dependencies between the attributes can thus be captured. However, little is known about what kinds of semantic dependencies can be modelled in this way. The aim of this paper is to shed light on this question, focusing on settings where the embedding of an entity is obtained by pooling the embeddings of its known attributes. Our particular focus is on studying the theoretical limitations of different embedding strategies, rather than their ability to effectively learn attribute dependencies in practice. We first show a number of negative results, revealing that some of the most popular embedding models are not able to capture even basic Horn rules. However, we also find that some embedding strategies are capable, in principle, of modelling both monotonic and non-monotonic attribute dependencies.
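The setting studied in the abstract, where an entity's embedding is obtained by pooling the embeddings of its known attributes, can be sketched as follows. This is a minimal illustrative sketch only: the attribute names, the embedding values, and the choice of sum/mean pooling are assumptions for exposition, not definitions taken from the paper.

```python
import numpy as np

# Hypothetical attribute embeddings (names and values are illustrative
# assumptions, not taken from the paper).
attribute_embeddings = {
    "bird": np.array([1.0, 0.0]),
    "can_fly": np.array([0.0, 1.0]),
    "penguin": np.array([1.0, -1.0]),
}

def embed_entity(attributes, pooling="sum"):
    """Pool the embeddings of an entity's known attributes
    into a single entity embedding."""
    vecs = [attribute_embeddings[a] for a in attributes]
    if pooling == "sum":
        return np.sum(vecs, axis=0)
    if pooling == "mean":
        return np.mean(vecs, axis=0)
    raise ValueError(f"unknown pooling: {pooling}")

# An entity known only to be a bird:
tweety = embed_entity(["bird"])          # array([1., 0.])

# Adding a further known attribute changes the pooled embedding,
# which is what allows (some) attribute dependencies to be captured:
pingu = embed_entity(["bird", "penguin"])  # array([ 2., -1.])
```

Under such a scheme, a monotonic rule would require that whenever the pooled embedding satisfies the condition associated with the rule body, it also satisfies the one associated with the rule head; the paper's analysis concerns which pooling strategies make this possible in principle.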
Subject Areas: Knowledge Representation, Semantic Web and Search, Machine Learning
Archival Status: Archival