Keywords: Logical reasoning, Neurosymbolic reasoning, Explainable AI
TL;DR: We demonstrate three improvements to training an embedding model for theorem proving.
Abstract: Knowledge bases traditionally require manual optimization to ensure reasonable performance when answering queries. We build on previous neurosymbolic approaches by improving the training of an embedding model for logical statements that maximizes similarity between unifying atoms and minimizes similarity between non-unifying atoms. In particular, we evaluate three approaches to training this model: increasing the occurrence of atoms with repeated terms, mutating anchor atoms to create positive and negative examples for use in triplet loss, and training with the “hardest” examples.
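The training signal described in the abstract can be illustrated with a minimal sketch. The atom representations, the mutation rules, and the `embed` function below are all hypothetical stand-ins (a deterministic hash-based vector, not a learned model); the sketch only shows the shape of the triplet objective: an anchor atom, a positive mutation that still unifies (e.g. variable renaming), and a negative mutation that cannot unify (e.g. a changed predicate symbol).

```python
import hashlib
import math
import random

def embed(atom: str, dim: int = 8) -> list[float]:
    # Toy deterministic "embedding": a pseudo-random unit vector seeded by
    # a stable hash of the atom string. A real system would use a trained
    # neural encoder over the atom's structure.
    seed = int(hashlib.md5(atom.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    v = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def distance(u: list[float], v: list[float]) -> float:
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor: str, positive: str, negative: str,
                 margin: float = 1.0) -> float:
    # Standard triplet margin loss: pull the unifying (positive) atom
    # closer to the anchor than the non-unifying (negative) atom.
    a, p, n = embed(anchor), embed(positive), embed(negative)
    return max(0.0, distance(a, p) - distance(a, n) + margin)

# Hypothetical mutations of an anchor atom:
anchor   = "parent(X, Y)"
positive = "parent(A, B)"   # variable renaming -> still unifies with anchor
negative = "sibling(X, Y)"  # different predicate -> cannot unify

print(triplet_loss(anchor, positive, negative))
```

"Hardest"-example training would then amount to selecting, within a batch, the triplets with the largest loss values before updating the encoder.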
Submission Number: 19