An Evaluation of Approaches to Train Embeddings for Logical Inference

AAAI 2025 Workshop NeurMAD Submission 19 Authors

10 Dec 2024 (modified: 14 Feb 2025) · License: CC BY 4.0
Keywords: Logical reasoning, Neurosymbolic reasoning, Explainable AI
TL;DR: We demonstrate three improvements to training an embedding model for theorem proving.
Abstract: Knowledge bases traditionally require manual optimization to ensure reasonable performance when answering queries. We build on previous neurosymbolic approaches by improving the training of an embedding model for logical statements that maximizes the similarity of unifying atoms and minimizes the similarity of non-unifying atoms. In particular, we evaluate three approaches to training this model: increasing the occurrence of atoms with repeated terms, mutating anchor atoms to create positive and negative examples for use in triplet loss, and training with the "hardest" examples.
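The abstract's core training setup, triplet loss over atom embeddings with hard-negative selection, can be illustrated with a minimal sketch. This is not the authors' implementation: the embedding vectors, mutation scheme, and helper names below are hypothetical stand-ins used only to show how a mutated anchor serves as the positive and how the "hardest" (closest) non-unifying atom is chosen as the negative.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors:
    push d(anchor, positive) + margin below d(anchor, negative)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def hardest_negative(anchor, negatives):
    """Hard-example mining: pick the non-unifying embedding
    closest to the anchor, i.e. the most loss-violating one."""
    dists = [np.linalg.norm(anchor - n) for n in negatives]
    return negatives[int(np.argmin(dists))]

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)                     # embedding of an anchor atom
positive = anchor + 0.1 * rng.normal(size=8)    # e.g. a unifying mutation of the anchor
negatives = [rng.normal(size=8) for _ in range(5)]  # non-unifying atoms

neg = hardest_negative(anchor, negatives)
loss = triplet_loss(anchor, positive, neg)      # gradient of this would update the encoder
```

In a real pipeline the three vectors would come from a learned encoder over logical atoms, and the loss would be backpropagated; the sketch only shows the example-selection and loss geometry.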
Submission Number: 19