$\pi$-CoT: Prolog-Initialized Chain-of-Thought Prompting for Multi-Hop Question-Answering

ICLR 2026 Conference Submission 19535 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: large language models, prompting, retrieval augmented generation, multi-hop question-answering, long context, reasoning, Prolog, logic programming
Abstract: Chain-of-Thought (CoT) prompting significantly enhances large language models' (LLMs) problem-solving capabilities, but still struggles with complex multi-hop questions, often falling into circular reasoning patterns or deviating from the logical path entirely. This limitation is particularly acute in retrieval-augmented generation (RAG) settings, where obtaining the right context is critical. We introduce Prolog-Initialized Chain-of-Thought ($\pi$-CoT), a novel prompting strategy that combines logic programming's structural rigor with language models' flexibility. $\pi$-CoT reformulates multi-hop questions into Prolog queries decomposed as single-hop sub-queries, which are resolved systematically through SLICE---a procedure that seamlessly bridges symbolic and neural reasoning by translating each sub-query to natural language for LLM-powered question-answering, then converting answers back to an evolving knowledge base. By grounding each retrieval step in Prolog's systematic query resolution, we maintain a focused reasoning trajectory used to initialize the final CoT reasoning step. Extensive experimental evaluation demonstrates that $\pi$-CoT significantly outperforms RAG and in-context baselines on multi-hop question-answering benchmarks.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 19535