Tuning Language Models with Spatial Logic for Complex Reasoning

Published: 25 Jun 2024, Last Modified: 02 Aug 2024 · ACL 2024 Workshop SpLU-RoboNLP · CC BY 4.0
Keywords: Spatial Reasoning, Neuro-symbolic Training, Constraint Based Learning, Spatial Question Answering
TL;DR: We propose training models with neuro-symbolic techniques that exploit the logical rules of reasoning as constraints, providing a transferable source of supervision that carries over to out-of-domain settings where such constraints are unavailable.
Abstract: Recent research shows that more data and larger models can provide more accurate solutions to natural language problems that require reasoning. However, models can easily fail at unobserved levels of compositional complexity because they may not acquire the level of abstraction needed for generalizability. To alleviate this issue, we propose to train models with neuro-symbolic techniques that exploit the logical rules of reasoning as constraints and provide additional supervision sources to the model. Training models to adhere to the rules of reasoning pushes them toward the more effective abstractions needed for generalizability and transfer learning. We focus on the challenging problem of spatial reasoning over text, and our results on multiple benchmarks confirm our hypothesis of effective domain transfer based on neuro-symbolic training. We apply our neuro-symbolic training approach to multiple commonly used language models.
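
To illustrate the core idea of using logical rules as constraints during training, here is a minimal PyTorch sketch, not the paper's actual implementation: alongside the standard cross-entropy loss on the QA answer, a soft penalty is added whenever predicted pairwise spatial relations violate a rule such as transitivity of "left of" (left(a,b) ∧ left(b,c) → left(a,c)). All names here (`pair_logits`, `answer_logits`, `lam`) are illustrative assumptions.

```python
# Minimal sketch of constraint-based (neuro-symbolic) training, assuming a
# model that scores pairwise spatial relations. Names are hypothetical.
import torch
import torch.nn.functional as F

def transitivity_penalty(p_left: torch.Tensor) -> torch.Tensor:
    """Soft violation of the rule left(a,b) AND left(b,c) -> left(a,c).

    p_left: (n, n) tensor where p_left[i, j] = predicted P(left(i, j)).
    Using the product t-norm for AND, the premise strength is
    p_left[a, b] * p_left[b, c]; the penalty grows when the premise is
    strong but the conclusion p_left[a, c] is weak.
    """
    premise = p_left.unsqueeze(2) * p_left.unsqueeze(0)   # [a, b, c]
    conclusion = p_left.unsqueeze(1)                      # [a, :, c]
    return torch.relu(premise - conclusion).mean()

def training_loss(answer_logits, labels, pair_logits, lam=0.5):
    """Cross-entropy on the QA answer plus a weighted constraint term."""
    ce = F.cross_entropy(answer_logits, labels)
    p_left = torch.sigmoid(pair_logits)
    return ce + lam * transitivity_penalty(p_left)
```

Note that the constraint term needs no gold labels for the pairwise relations; it is this label-free extra supervision that the abstract credits with enabling transfer to domains where constraint annotations are absent.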
Submission Type: Long Paper (8 Pages)
Archival Option: This is a non-archival submission
Submission Number: 7