Towards Neural Theorem Proving at Scale

Anonymous

Published: 29 Jun 2018 (NAMPI 2018), Last Modified: 05 May 2023
Abstract: Neural models that combine representation learning and reasoning in an end-to-end trainable manner are receiving increasing interest. However, their use is severely limited by their computational complexity, which renders them unusable on real-world datasets. We focus on the Neural Theorem Prover (NTP) model proposed by Rocktäschel and Riedel (2017), a continuous relaxation of the Prolog backward chaining algorithm in which unification between terms is replaced by the similarity between their embedding representations. To answer a given query, this model needs to consider all possible proof paths and then aggregate the results, which quickly becomes infeasible even for small Knowledge Bases. We observe that the inference process in this model can be accurately approximated by considering only the proof paths with the highest proof scores. This enables inference and learning on previously intractable Knowledge Bases.
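To make the bottleneck concrete, here is a minimal sketch of depth-one NTP-style inference, assuming goal and fact embeddings are NumPy vectors; `soft_unify` and `proof_score` are illustrative names, not the paper's implementation:

```python
import numpy as np

def soft_unify(u: np.ndarray, v: np.ndarray) -> float:
    # Continuous relaxation of symbolic unification: rather than requiring
    # two symbols to match exactly, score the similarity of their
    # embeddings (here an RBF kernel over Euclidean distance).
    return float(np.exp(-np.linalg.norm(u - v)))

def proof_score(goal: np.ndarray, facts: np.ndarray) -> float:
    # Exact depth-one inference: unify the goal against *every* fact in
    # the Knowledge Base and keep the best score. Full backward chaining
    # also expands rules recursively, so the number of proof paths grows
    # multiplicatively with KB size and proof depth.
    return max(soft_unify(goal, f) for f in facts)

# Example: a 3-fact KB with 4-dimensional embeddings.
rng = np.random.default_rng(0)
facts = rng.normal(size=(3, 4))
goal = rng.normal(size=4)
print(proof_score(goal, facts))
```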
TL;DR: We propose a method that substantially reduces the computational complexity of Neural Theorem Provers by reducing parts of the inference process to Nearest Neighbour Search problems.
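The reduction the TL;DR refers to can be illustrated as follows: under a distance-based kernel, the highest-scoring unifications are exactly the nearest neighbours of the goal embedding, so only those k candidate proofs need to be expanded further. This is a hedged sketch using exact brute-force k-NN; the function name is an assumption for illustration:

```python
import numpy as np

def topk_unifications(goal: np.ndarray, facts: np.ndarray, k: int = 10):
    # Under a distance-based kernel, the k highest-scoring unifications
    # are exactly the k nearest neighbours of the goal embedding, so
    # only these k partial proofs need to be expanded further instead
    # of all |KB| of them.
    dists = np.linalg.norm(facts - goal, axis=1)  # distance to every fact
    idx = np.argsort(dists)[:k]                   # k nearest facts
    return idx, np.exp(-dists[idx])               # fact indices and scores
```

In practice, the exact argsort would be swapped for an approximate nearest-neighbour index (e.g., FAISS), making retrieval sublinear in the size of the Knowledge Base, in line with the Approximate Nearest Neighbours keyword below.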
Keywords: Neural Theorem Provers, Approximate Nearest Neighbours, Neural-Symbolic Integration, Program Induction