Keywords: Neurosymbolic AI, Long-Context Reasoning, Cross-Lingual Retrieval, Multi-Target Inference, Prompting Strategies
TL;DR: The paper introduces NeuroSymbolic Augmented Reasoning (NSAR), a method that combines neural and symbolic reasoning to significantly enhance the multilingual, long-context reasoning performance of large language models.
Track: Neurosymbolic Generative Models
Abstract: Large language models (LLMs) often struggle to perform multi-target reasoning in long-context scenarios where relevant information is scattered across extensive documents. To address this challenge, we introduce NeuroSymbolic Augmented Reasoning (NSAR), which combines the benefits of neural and symbolic reasoning during inference. NSAR explicitly extracts symbolic facts from text and generates executable Python code to handle complex reasoning steps. Through extensive experiments across seven languages and diverse context lengths, we demonstrate that NSAR significantly outperforms both a vanilla RAG baseline and advanced prompting strategies in accurately identifying and synthesizing multiple pieces of information. Our results highlight the effectiveness of combining explicit symbolic operations with neural inference for robust, interpretable, and scalable reasoning in multilingual settings.
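The abstract's core loop, extracting symbolic facts from text and then executing generated Python code for the reasoning step, can be sketched roughly as follows. This is a hypothetical minimal illustration, not the paper's actual pipeline: the fact pattern, the `extract_facts`/`generate_code` helpers, and the "find the largest value" task are all assumptions made for the example.

```python
import re

def extract_facts(text):
    """Illustrative fact extraction: pull (entity, attribute, value)
    triples matching a simple 'X has Y N' pattern (an assumed format)."""
    return re.findall(r"(\w+) has (\w+) (\d+)", text)

def generate_code(facts, attribute):
    """Emit executable Python that reasons over the extracted facts
    symbolically, here selecting the entity with the largest value."""
    rows = ", ".join(f"({e!r}, {int(v)})" for e, a, v in facts if a == attribute)
    return (f"facts = [{rows}]\n"
            "answer = max(facts, key=lambda r: r[1])[0]\n")

# Toy multi-target input: relevant values are scattered across the text.
text = ("Berlin has population 3645000. "
        "Paris has population 2161000. "
        "Madrid has population 3223000.")
facts = extract_facts(text)        # symbolic facts from neural/extracted text
code = generate_code(facts, "population")
scope = {}
exec(code, scope)                  # run the generated reasoning code
print(scope["answer"])             # → Berlin
```

In NSAR the extraction and code generation would be performed by the LLM rather than by regexes and templates; the point of the sketch is only the division of labor between neural extraction and explicit, executable symbolic reasoning.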
Paper Type: Long Paper
Submission Number: 23