Guiding Explanation-based NLI through Symbolic Inference Types

ACL ARR 2025 May Submission 3750 Authors

19 May 2025 (modified: 03 Jul 2025), ACL ARR 2025 May Submission, CC BY 4.0
Abstract: This work investigates localised, quasi-symbolic inference behaviours in distributional representation spaces, focusing on explanation-based Natural Language Inference (NLI), in which two explanations (premises) are provided to derive a single conclusion. We first establish the connection between natural language and symbolic inference by characterising quasi-symbolic NLI behaviours, which we term symbolic inference types. We then establish the connection between distributional and symbolic inference by formalising the Transformer encoder-decoder NLI model as a rule-based neural NLI model, a quasi-symbolic conceptual framework for NLI. Extensive experiments reveal that symbolic inference types can improve model training and inference dynamics and deliver localised, symbolic control over inference. Based on these findings, we conjecture that the different inference behaviours are encoded as functionally separated subspaces in the latent parametric space, and we identify probing the composition and generalisation of symbolic inference behaviours in distributional representation spaces as a future direction.
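To make the task setup concrete, the sketch below shows how an explanation-based NLI instance (two explanations concatenated as input, a conclusion generated as output) might be fed to an encoder-decoder Transformer. The model name, input template, and decoding settings are illustrative assumptions, not the submission's actual configuration.

```python
# Minimal sketch of explanation-based NLI with a seq2seq Transformer:
# two explanations (premises) in, one generated conclusion out.
# Model choice and prompt format are hypothetical placeholders.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # placeholder encoder-decoder; the paper's model may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

premise_1 = "A magnet attracts objects made of iron."
premise_2 = "A paperclip is made of iron."

# Hypothetical template joining the two explanations into one encoder input.
source = f"premise 1: {premise_1} premise 2: {premise_2} conclusion:"

inputs = tokenizer(source, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```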
Paper Type: Long
Research Area: Semantics: Lexical and Sentence-Level
Research Area Keywords: Quasi-symbolic NLI, localisation
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Keywords: Quasi-symbolic NLI, localisation
Submission Number: 3750