Keywords: large language models, prompting, retrieval augmented generation, multi-hop question-answering, long context, reasoning, Prolog, logic programming
Abstract: Chain-of-Thought (CoT) prompting significantly enhances the problem-solving capabilities of large language models (LLMs), but it still struggles with complex multi-hop questions, often falling into circular reasoning patterns or deviating from the logical path entirely.
This limitation is particularly acute in retrieval-augmented generation (RAG) settings, where obtaining the right context is critical.
We introduce **P**rolog-**I**nitialized **C**hain-**o**f-**T**hought ($\pi$-CoT), a novel prompting strategy that combines logic programming's structural rigor with language models' flexibility. $\pi$-CoT reformulates multi-hop questions into Prolog queries decomposed as single-hop sub-queries. These are resolved sequentially, producing intermediate artifacts, with which we initialize the subsequent CoT reasoning procedure. Extensive experiments demonstrate that $\pi$-CoT significantly outperforms standard RAG and in-context CoT on multi-hop question-answering benchmarks.
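To make the reformulation concrete, consider a minimal sketch (not taken from the paper, using hypothetical predicates `best_picture/2` and `directed_by/2`) of how a two-hop question might be expressed as a conjunctive Prolog query of single-hop sub-goals:

```prolog
% Hypothetical example: "Who directed the film that won Best Picture in 1998?"
% Each sub-goal is a single-hop sub-query; the binding produced by one
% (here, Film) feeds the next, mirroring sequential resolution.
?- best_picture(1998, Film),      % hop 1: retrieve the winning film
   directed_by(Film, Director).   % hop 2: retrieve that film's director
```

Under this reading, the intermediate bindings (e.g., `Film`) are the artifacts that initialize the subsequent CoT reasoning step.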
Primary Area: foundation or frontier models, including LLMs
Submission Number: 19535