Neuro-symbolic Training for Reasoning over Spatial Language

ACL ARR 2024 June Submission 3407 Authors

16 Jun 2024 (modified: 12 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Recent research shows that more data and larger models can provide more accurate solutions to natural language problems requiring reasoning. However, models can easily fail on unobserved, compositionally complex inputs because they do not achieve the level of abstraction required for generalization. To alleviate this issue, we propose training language models with neuro-symbolic techniques that exploit the logical rules of reasoning as constraints, providing an additional source of supervision to the model. Training models to adhere to the rules of reasoning pushes them toward the more effective abstractions needed for generalization and transfer learning. We focus on the challenging problem of spatial reasoning over text. Our results on multiple benchmarks using several language models confirm our hypothesis of effective domain transfer based on neuro-symbolic training.
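The abstract's core idea of using logical rules as constraints can be sketched as a soft constraint penalty added to the usual supervised loss. The sketch below is a hypothetical illustration, not the authors' implementation: the function names, the weight `lam`, and the specific spatial rule (converse consistency of "left of" / "right of") are all illustrative assumptions.

```python
import math

def cross_entropy(p_true: float) -> float:
    """Standard supervised loss: negative log-likelihood of the gold label."""
    return -math.log(max(p_true, 1e-12))

def converse_constraint(p_left_ab: float, p_right_ba: float) -> float:
    """Soft penalty for violating the logical rule left(A, B) <-> right(B, A).

    If the model's probabilities for the two converse statements disagree,
    the absolute gap is returned as a violation score (illustrative choice;
    other relaxations of logic into differentiable penalties exist).
    """
    return abs(p_left_ab - p_right_ba)

def total_loss(p_true: float, p_left_ab: float, p_right_ba: float,
               lam: float = 0.5) -> float:
    """Supervised loss plus a weighted constraint-violation term.

    The constraint term supplies extra supervision even on examples where
    only the logical rule, not a gold label, flags the inconsistency.
    """
    return cross_entropy(p_true) + lam * converse_constraint(p_left_ab, p_right_ba)

# A logically consistent prediction pays only the supervised cost,
# while an inconsistent one incurs an additional penalty.
consistent = total_loss(0.9, 0.9, 0.9)
inconsistent = total_loss(0.9, 0.9, 0.2)
print(consistent < inconsistent)
```

In this toy setup the constraint acts exactly as the abstract describes: an additional supervision signal that rewards predictions consistent with the rules of spatial reasoning.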
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Spatial Reasoning, Neuro-symbolic Training, Constraint Based Learning, Spatial Question Answering
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 3407