Abstract: This work investigates localised, quasi-symbolic inference behaviours in distributional representation spaces by focusing on Explanation-based Natural Language Inference (NLI), where two explanations (premises) are provided to derive a single conclusion. We first establish the connection between natural language and symbolic inferences by characterising quasi-symbolic NLI behaviours, termed symbolic inference types. Next, we establish the connection between distributional and symbolic inferences by formalising the Transformer encoder-decoder NLI model as a rule-based neural NLI model, a quasi-symbolic NLI representation framework. We perform extensive experiments which reveal that symbolic inference types can enhance model training and inference dynamics, and deliver localised, symbolic inference control. Based on these findings, we conjecture that the different inference behaviours are encoded as functionally separated subspaces in the latent parametric space, pointing to a future direction of probing the composition and generalisation of symbolic inference behaviour in distributional representation spaces.
Paper Type: Long
Research Area: Semantics: Lexical and Sentence-Level
Research Area Keywords: Natural Language Inference, Quasi-symbolic NLI behaviour, localisation
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 1866