Logically Consistent Language Models via Neuro-Symbolic Integration

Published: 10 Oct 2024 · Last Modified: 10 Oct 2024 · Sys2-Reasoning Poster · CC BY 4.0
Keywords: logical consistency, factuality, probabilistic reasoning, language models, neuro-symbolic, knowledge editing
TL;DR: We propose an objective based on principled probabilistic reasoning to improve the factuality and logical consistency of LMs. Our method is agnostic to the chosen logical constraints and yields consistent improvements in the low-data regime.
Abstract: Large language models (LLMs) are a promising avenue for natural language understanding and generation. However, current LLMs are far from reliable: they are prone to generating non-factual information and, more crucially, to contradicting themselves when prompted to reason about relations between entities of the world. These problems are currently addressed with large-scale fine-tuning or by delegating reasoning to external tools. In this work, we strive for a middle ground and introduce a loss based on neuro-symbolic reasoning that teaches an LLM to be logically consistent with an external set of facts and rules, and that improves self-consistency even when the LLM is fine-tuned on a limited set of facts. Our approach also makes it easy to combine multiple logical constraints at once in a principled way, delivering LLMs that are more consistent w.r.t. all constraints and that improve over several baselines w.r.t. any given constraint. Moreover, our method allows LLMs to extrapolate more systematically to unseen but semantically similar factual knowledge, represented in unseen datasets.
Submission Number: 68
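
The abstract does not spell out the exact form of the neuro-symbolic loss, but a semantic-loss-style formulation is one common way to realize such a consistency objective. The sketch below is a hypothetical illustration for a single implication rule (A → B): the function name, the independence assumption between the two facts, and the toy probabilities are ours, not taken from the paper, and the authors' actual objective may differ.

```python
import torch

def implication_consistency_loss(p_antecedent: torch.Tensor,
                                 p_consequent: torch.Tensor,
                                 eps: float = 1e-8) -> torch.Tensor:
    """Semantic-loss-style penalty for a rule A -> B (illustrative sketch).

    p_antecedent / p_consequent are the model's probabilities that the
    antecedent and consequent facts are true (e.g. read off a verbalized
    yes/no prompt). Assuming the two facts are scored independently, the
    probability that a sampled truth assignment satisfies A -> B is
        1 - P(A) * (1 - P(B)),
    and the loss is its negative log, pushing probability mass toward
    assignments where the implication holds.
    """
    sat_prob = 1.0 - p_antecedent * (1.0 - p_consequent)
    return -torch.log(sat_prob + eps)


# Toy usage: the rule "isa(dolphin, mammal) -> isa(dolphin, animal)".
p_a = torch.tensor(0.9, requires_grad=True)  # P("dolphin is a mammal" is true)
p_b = torch.tensor(0.3, requires_grad=True)  # P("dolphin is an animal" is true)
loss = implication_consistency_loss(p_a, p_b)
loss.backward()
# Gradients push p_b up and p_a down, i.e. toward satisfying the rule.
print(float(loss), float(p_a.grad), float(p_b.grad))
```

Because each rule contributes one differentiable term of this kind, multiple logical constraints can in principle be combined by summing (or weighting) their losses alongside a standard factuality term on the known facts, which matches the abstract's claim that constraints are combined in a principled, constraint-agnostic way.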