Guiding Explanatory Inference through Inference Types

16 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Natural Language Inference, quasi-symbolic inference control, neuro-symbolic, representation learning
Abstract: This work investigates localised, quasi-symbolic inference behaviours in distributional representation spaces by focusing on Explanation-based Natural Language Inference (NLI), where two explanations (premises) are provided to derive a single conclusion. We first establish the connection between natural language and symbolic inference by characterising quasi-symbolic NLI behaviours, named \textit{inference types}. Next, we establish the connection between distributional and symbolic inference by formalising the Transformer NLI model as a rule-based neural NLI model: a \textit{quasi-symbolic NLI framework} in which different inference behaviours are encoded as functionally separated subspaces of the latent parametric space. Extensive experiments reveal that inference types can enhance model training and inference dynamics, enable localised, symbolic inference control, and support latent inference-type disentanglement. Based on these findings, future work will probe the composition and generalisation of symbolic inference behaviour in distributional representation spaces.
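To make the task setting concrete, the Explanation-based NLI format described above (two explanatory premises combined to derive a single conclusion) can be sketched as a simple input-construction routine. This is a minimal illustrative sketch only; the field names, separator format, and example sentences are assumptions, not the paper's actual preprocessing.

```python
def format_explanation_nli(premise1: str, premise2: str) -> str:
    """Combine two explanatory premises into a single model input
    from which a conclusion is to be derived.
    (Field names and layout are hypothetical, for illustration.)"""
    return f"premise 1: {premise1} premise 2: {premise2} conclusion:"

# Illustrative example of a two-premise explanatory inference:
# "A tree is a kind of plant." + "Plants require sunlight to grow."
# supports the conclusion "A tree requires sunlight to grow."
example_input = format_explanation_nli(
    "A tree is a kind of plant.",
    "Plants require sunlight to grow.",
)
print(example_input)
```

A sequence-to-sequence Transformer NLI model of the kind the abstract formalises would consume such an input and generate the conclusion as output.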
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 7028