Keywords: Large Language Models, Inductive Reasoning, Survey
Abstract: Reasoning is an important task for large language models (LLMs).
Among reasoning paradigms, inductive reasoning is a basic type,
characterized by its particular-to-general thinking process and the non-uniqueness of its answers.
The inductive mode is crucial for knowledge generalization and aligns well with human cognition, making it a fundamental mode of learning that has attracted increasing interest.
Despite its importance, inductive reasoning lacks a systematic summary.
Therefore, this paper presents the first comprehensive survey of inductive reasoning for LLMs.
First, methods for improving inductive reasoning are categorized into three main areas: post-training enhancement, test-time exploration, and data augmentation.
Then, current benchmarks of inductive reasoning are summarized, and a unified sandbox-based evaluation approach with the observation coverage metric is derived.
Finally, we analyze the sources of inductive ability and how simple model architectures and data aid inductive tasks, providing a solid foundation for future research.
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: code generation and understanding, mathematical NLP (both are forms of inductive reasoning); reasoning
Contribution Types: Surveys, Theory
Languages Studied: English
Submission Number: 1871