Keywords: Analogical reasoning, Large Language Models
TL;DR: We propose a novel Agentic Analogical Reasoning (AAR) paradigm for LLM reasoning. This paradigm treats the LLM as an agentic reasoner that integrates multi-turn insights along the reasoning trajectory.
Abstract: Analogical reasoning helps humans grasp new concepts by relating them to familiar ones. Recent work seeks to improve LLM reasoning by prompting analogical correspondences with semantically related scenarios. However, existing approaches perform single-turn reasoning and may generate unreliable analogical instances, which restricts their effectiveness in complex reasoning tasks. To address these limitations, we propose a novel Agentic Analogical Reasoning (AAR) paradigm for LLM reasoning. This paradigm treats the LLM as an agentic reasoner that integrates multi-turn insights along the reasoning trajectory: it iteratively generates analogical queries to trigger internal or external knowledge for analogical exemplification, and selectively identifies appropriate analogies to conduct further reasoning. To equip LLMs with the AAR capability, we design an analogical trajectory optimization algorithm comprising analogical trajectory generation and re-weighted trajectory training. Furthermore, a mixed training strategy is devised to progressively internalize agentic analogical reasoning as an intrinsic capability of LLMs. Finally, we conduct extensive experiments on seven reasoning-intensive datasets and achieve significant performance improvements over prior state-of-the-art (SOTA) methods.
The code is available at https://anonymous.4open.science/r/ICLR-8381.
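The multi-turn loop the abstract describes (generate an analogical query, trigger knowledge for exemplification, selectively keep reliable analogies, integrate them into the trajectory) can be sketched roughly as follows. This is an illustrative sketch only: the class and method names (`AgenticReasoner`, `generate_query`, `exemplify`, `select_analogy`, `reason`) and the score-based reliability filter are assumptions for exposition, not the authors' actual interface or training setup.

```python
class AgenticReasoner:
    """Toy stand-in for an LLM running the multi-turn AAR loop (illustrative only)."""

    def __init__(self, knowledge):
        self.knowledge = knowledge  # stands in for internal/external knowledge sources

    def generate_query(self, problem, trajectory):
        # Form an analogical query conditioned on the problem and prior turns.
        return f"analogy for: {problem} (turn {len(trajectory)})"

    def exemplify(self, query):
        # Trigger knowledge to produce candidate analogical instances for the query.
        return [k for k in self.knowledge if k["topic"] in query]

    def select_analogy(self, candidates):
        # Selectively identify a reliable analogy; here, a simple score threshold.
        good = [c for c in candidates if c["score"] > 0.5]
        return max(good, key=lambda c: c["score"]) if good else None

    def reason(self, problem, max_turns=3):
        # Multi-turn loop: query -> exemplify -> select -> integrate into trajectory.
        trajectory = []
        for _ in range(max_turns):
            candidates = self.exemplify(self.generate_query(problem, trajectory))
            candidates = [c for c in candidates if c not in trajectory]
            analogy = self.select_analogy(candidates)
            if analogy is None:
                break  # no reliable analogy this turn; stop accumulating
            trajectory.append(analogy)
        return trajectory  # downstream reasoning would condition on this trajectory


knowledge = [
    {"topic": "orbits", "score": 0.9, "text": "electrons ~ planets"},
    {"topic": "orbits", "score": 0.2, "text": "unreliable analogy"},
]
trajectory = AgenticReasoner(knowledge).reason("explain orbits")
```

In this toy run only the high-scoring analogy survives the selection step, mirroring the paper's point that unreliable analogical instances should be filtered out rather than carried into further reasoning.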
Primary Area: generative models
Submission Number: 8381